- Mar 13, 2025
-
-
tmpolaczyk authored
Shouldn't matter much, but this is run on every produced block, so it's free performance.
-
PG Herveou authored
Add missing pre-compiles 02 -> 09 [weights changes](https://weights.tasty.limo/compare?repo=polkadot-sdk&threshold=10&path_pattern=substrate%2Fframe%2F**%2Fsrc%2Fweights.rs%2Cpolkadot%2Fruntime%2F*%2Fsrc%2Fweights%2F**%2F*.rs%2Cpolkadot%2Fbridges%2Fmodules%2F*%2Fsrc%2Fweights.rs%2Ccumulus%2F**%2Fweights%2F*.rs%2Ccumulus%2F**%2Fweights%2Fxcm%2F*.rs%2Ccumulus%2F**%2Fsrc%2Fweights.rs&method=asymptotic&ignore_errors=true&unit=time&old=master&new=pg%2Fprecompiles02_09&pallet=revive) --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
PG Herveou authored
Update pallet-revive-fixtures so that it can build without looking up dependencies from the workspace --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Alexandru Vasile authored
This PR punishes behaviors that deviate from the notification spec. When a peer misbehaves by writing data on a unidirectional read stream, the peer is banned and disconnected immediately. In this PR:
- The `NotificationOutError` is enriched with a termination reason and made publicly available to higher levels
- The protocol misbehavior is propagated through the `CloseDesired` events
- The network behavior of the protocol is responsible for banning the peer
- The peer is banned immediately and, as a result, the reputation system disconnects the malicious / misbehaving peer
- Logs are enriched with protocol names

Closes: https://github.com/paritytech/polkadot-sdk/issues/7722 cc @paritytech/networking --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
PG Herveou authored
Support "latest" blocktag in ethGetLogs from_block and to_block parameters This is not in specs (https://github.com/ethereum/execution-apis/blob/main/src/schemas/filter.yaml#L17) but defined and used by 3rd parties and in some other reference docs See https://docs.metamask.io/services/reference/ethereum/json-rpc-methods/eth_getlogs/ --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
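A minimal sketch of the idea, with illustrative names rather than the actual pallet-revive RPC types: the "latest" tag simply resolves to the current best block number before the filter range is evaluated.
```rust
/// Illustrative only: `BlockTag` and `resolve_block` are assumed names,
/// not the pallet-revive RPC API.
enum BlockTag {
    Latest,
    Number(u64),
}

/// Resolve a `from_block`/`to_block` parameter to a concrete block number.
fn resolve_block(tag: BlockTag, best_block: u64) -> u64 {
    match tag {
        // "latest" maps to the current best block, matching the behaviour
        // documented by third-party providers.
        BlockTag::Latest => best_block,
        BlockTag::Number(n) => n,
    }
}
```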
-
thiolliere authored
Giving the wrong origin in `extrinsic_call` would result in:
```
   |
43 | #[benchmarks]
   | ^^^^^^^^^^^^^
   | |
   | expected associated type, found `Result<RawOrigin<...>, ...>`
   | arguments to this function are incorrect
   |
   = note: expected associated type `<T as frame_system::Config>::RuntimeOrigin`
                         found enum `Result<RawOrigin<<T as frame_system::Config>::AccountId>, <T as frame_system::Config>::RuntimeOrigin>`
note: method defined here
  --> $WORKSPACE/substrate/frame/support/src/traits/dispatch.rs
   |
   | fn dispatch_bypass_filter(self, origin: Self::RuntimeOrigin) -> DispatchResultWithPostInfo;
   |    ^^^^^^^^^^^^^^^^^^^^^^
   = note: this error originates in the attribute macro `benchmarks` (in Nightly builds, run with -Z macro-backtrace for more info)
```
Now it results in an error message with a good span. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
ordian authored
This PR adds a convenience extrinsic `manual_slash` for the governance to slash a validator manually.

## Changes
* The `on_offence` implementation for the Staking pallet accepts a slice of `OffenceDetails` including the full validator exposure, however, it simply [ignores](https://github.com/paritytech/polkadot-sdk/blob/c8d33396/substrate/frame/staking/src/pallet/impls.rs#L1864) that part. I've extracted the functionality into an inherent `on_offence` method that takes `OffenceDetails` without the full exposure, and this is called directly in `manual_slash`.
* `manual_slash` creates an offence for a validator with a given slash percentage.

## Questions
- [x] Should `manual_slash` accept a session instead of an era when the validator was in the active set? Staking thinks in terms of eras and we can check out of bounds this way, which is why it was chosen for this implementation, but if there are arguments against, happy to change to a session index.
- [x] Should the accepted origin be something more than just root? Changed to `T::AdminOrigin` to align with `cancel_deferred_slash`.
- [x] Should I adapt this PR also against https://github.com/paritytech/polkadot-sdk/pull/6996? Looking at the changes, it should apply mostly without conflicts.
--------- Co-authored-by:
Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by:
Ankan <10196091+Ank4n@users.noreply.github.com>
-
- Mar 12, 2025
-
-
Ankan authored
## Summary
The existing fungible migration code has an issue when handling partially unbonding accounts, leaving them in an inconsistent state. These changes fix it by properly withdrawing overstake from unlock chunks. This PR also removes the `withdraw_overstake` extrinsic from pallet-staking, as this scenario could only occur before the fungible migration. With fungibles, over-staking is no longer possible.

## TODO
- [ ] Backport to stable2503.
--------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Kian Paimani authored
Co-authored-by:
Ankan <ankan.anurag@gmail.com>
-
- Mar 11, 2025
-
-
Alexandru Vasile authored
This PR makes the litep2p backend the default network backend in Kusama. We performed a gradual rollout in Kusama by asking validators to manually switch to litep2p. The rollout went smoothly, with 250 validators running litep2p without issues. This PR represents the next step in testing the backend at scale. Thanks to everyone who contributed to making this happen! A special shoutout to the validators for their prompt support and cooperation
🙏 While at it, the litep2p release is bumped to the latest 0.9.2, which downgrades a spamming log to debug.

### CLI Testing Done
```
### Kusama without network backend specified
RUST_LOG=info ./target/release/polkadot --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output
2025-03-10 14:24:18.503  INFO main sub-libp2p: Running litep2p network backend

### Kusama with libp2p
RUST_LOG=info ./target/release/polkadot --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output --network-backend libp2p
INFO main sub-libp2p: Running libp2p network backend

### Kusama with litep2p
RUST_LOG=info ./target/release/polkadot --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output --network-backend litep2p
INFO main sub-libp2p: Running litep2p network backend

### Polkadot without network backend specified
RUST_LOG=info ./target/release/polkadot --chain polkadot --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output
2025-03-10 14:27:03.762  INFO main sub-libp2p: Running libp2p network backend
```
cc @paritytech/networking --------- Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
PG Herveou authored
[pallet-revive] Add support for eip1898 block notation https://eips.ethereum.org/EIPS/eip-1898 --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
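For reference, EIP-1898 allows a block parameter to be given either as a number or as an object carrying a block hash. A hedged sketch of that shape, with illustrative names rather than the actual pallet-revive RPC types:
```rust
/// Illustrative shape of an EIP-1898 block parameter; the actual
/// pallet-revive RPC types may differ.
enum BlockNumberTagOrHash {
    /// A tag such as "latest", "earliest" or "pending".
    Tag(String),
    /// `{ "blockNumber": "0x..." }`
    Number(u64),
    /// `{ "blockHash": "0x...", "requireCanonical": true|false }`
    Hash { hash: [u8; 32], require_canonical: bool },
}
```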
-
- Mar 10, 2025
-
-
Bastian Köcher authored
As proposed by Gui :)
-
jpserrat authored
Closes #6851

This PR adds an `EventEmitter` trait to the XCM Executor configuration, enabling event emission for XCM handling. The implementation introduces three dedicated functions to emit relevant events (a hedged sketch of the trait shape follows after this entry):
- `emit_sent_event`: Emits a `Sent` event when an XCM is successfully sent.
- `emit_send_failure_event`: Emits a `SendFailed` event when an XCM fails to send.
- `emit_process_failure_event`: Emits a `ProcessXcmError` event when an XCM fails during processing.

Kusama address: FkB6QEo8VnV3oifugNj5NeVG3Mvq1zFbrUu4P5YwRoe5mQN --------- Co-authored-by:
Raymond Cheung <178801527+raymondkfcheung@users.noreply.github.com> Co-authored-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
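A hedged sketch of the trait shape named in the entry above; the parameter lists are omitted because the exact xcm-executor signatures are not given here:
```rust
/// Sketch only: the real xcm-executor trait takes XCM-specific arguments
/// (origin, destination, message, message id, error) that are omitted here.
trait EventEmitter {
    /// Emit a `Sent` event when an XCM is successfully sent.
    fn emit_sent_event();
    /// Emit a `SendFailed` event when an XCM fails to send.
    fn emit_send_failure_event();
    /// Emit a `ProcessXcmError` event when an XCM fails during processing.
    fn emit_process_failure_event();
}
```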
-
Bastian Köcher authored
Close: https://github.com/paritytech/polkadot-sdk/issues/7816 --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
clangenb authored
Closes #7845 --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Guillaume Thiolliere <guillaume.thiolliere@parity.io>
-
- Mar 07, 2025
-
-
girazoki authored
# Description
Chains like Moonbeam work with an existential deposit (ED) of 0 (insecure-ed-0), which is unusable with the current pallet-transaction-payment benchmark. This PR adds an if-else case for when the existential deposit found is 0.
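The entry above only states that an if-else branch was added for a zero existential deposit; a rough, purely illustrative sketch of that kind of branch (the concrete balances are assumptions, not the actual benchmark values):
```rust
/// Illustrative only: pick a caller balance for the benchmark that also works
/// when the existential deposit is zero.
fn benchmark_caller_balance(existential_deposit: u128) -> u128 {
    if existential_deposit == 0 {
        // insecure-ed-0 chains: fall back to an arbitrary non-zero balance.
        1_000_000_000
    } else {
        // Otherwise scale the ED as before.
        existential_deposit.saturating_mul(1_000)
    }
}
```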
-
PG Herveou authored
Allow using the legacy data field for GenericTransaction
-
Michal Kucharczyk authored
#### PR Description
This pull request introduces measures to handle finality stalls by:
- notifying outdated transactions with a [`FinalityTimeout`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/api/src/lib.rs#L145-L147) event,
- removing outdated views from the `view_store`.

An item is considered _outdated_ when the difference between its associated block and the current block exceeds a pre-defined threshold (a minimal sketch of this rule follows after this entry).

#### Note for Reviewers
The core logic is provided in the following small commits:
- `ViewStore`: new method [`finality_stall_view_cleanup`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/src/fork_aware_txpool/view_store.rs#L869-L903) for removing stale views was added: 64267000
- `ForkAwareTransactionPool`: core logic for tracking finality stalls added here: 7b37ea6f. Entry point in [`finality_stall_cleanup`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1096-L1136)
- Some related renaming was made to better reflect purpose and shorten the names: 1a3a1284, a511601f. Also a new method [`transactions_finality_timeout`](https://github.com/paritytech/polkadot-sdk/blob/a511601f/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L771-L790) for triggering external events was added to `MultiViewListener`.
- `included_transactions`, which basically is a mapping `block hash -> included transaction hashes`, is also used to find the included transactions.

I also sneaked in some minor improvements:
- fixed per-transaction logging: 1572f721
- the `handle_pre_finalized` method was removed; it was some old leftover which is no longer needed: a6f84ad0

closes: #5482 --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Sebastian Kunert <skunert49@gmail.com> Co-authored-by:
Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
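The "outdated" rule referenced above is simple; a minimal sketch with assumed names (the real logic lives in the fork-aware txpool code linked in the entry):
```rust
/// Illustrative only: an item is outdated when its associated block lags the
/// current block by more than a pre-defined threshold.
fn is_outdated(associated_block: u32, current_block: u32, threshold: u32) -> bool {
    current_block.saturating_sub(associated_block) > threshold
}
```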
-
Iulian Barbu authored
# Description
Builds up towards addressing #5497 by creating some zombienet-sdk code infra that can be used to spin up regular networks, as described in the fork aware transaction pool testing setup added in #7100. It will be used for developing tests against such networks, and to also spawn them on demand locally through tooling that will be developed in follow-ups.

## Integration
Node/runtime developers can run tests based on the zombienet-sdk infra that spins up frequently used networks which can be used for analyzing the behavior of various node-related components, like the fork aware transaction pool.

## Review Notes
- Uses the ttxt API implemented here: https://github.com/michalkucharczyk/tx-test-tool/pull/22/files
- Currently, only two test scenarios are considered: 10k future & 10k ready txs are sent to two separate networks - one parachain and one relaychain, asserting at the end on the finalization of all 20k txs on both networks.
--------- Sign...
-
Bastian Köcher authored
This PR ensures that we remove the `authorization` for a runtime upgrade if the version check failed. If that check is failing, it means that the runtime upgrade is invalid and the check will never succeed. Besides that, the PR does some cleanups. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
- Mar 06, 2025
-
-
thiolliere authored
If the inner transaction extension used inside `SkipCheckIfFeeless` is composed of multiple extensions, then the metadata is not correct; it is now fixed. E.g. if the transaction extension is `SkipCheckIfFeeless::<Runtime, (Payment1, Payment2)>`, then the metadata was wrong. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Utkarsh Bhardwaj authored
# Description
* This PR adds a new extrinsic `poke_deposit` to `pallet-proxy`. This extrinsic will be used to re-adjust the deposits made in the pallet to create a proxy or to create an announcement.
* Part of #5591

## Review Notes
* Added a new extrinsic `poke_deposit` in `pallet-proxy`.
* This extrinsic checks and adjusts the deposits made for either creating a proxy or creating an announcement or both.
* Added a new event `DepositPoked` to be emitted upon a successful call of the extrinsic.
* Although the immediate use of the extrinsic will be to give back some of the deposit after the AH-migration, the extrinsic is written such that it can work if the deposit decreases or increases (both).
* The call to the extrinsic would be `free` if an actual adjustment is made to the deposit for creating a proxy or to the deposit for creating an announcement or both, and `paid` otherwise (when no deposit is changed).
* Added a new enum `DepositKind` to differen...
-
Raymond Cheung authored
This PR enhances **`test_log_capture`**, ensuring logs are **captured for assertions** and **printed to the console** during test execution.

## **Motivation**
- Partially addresses #6119 and #6125, to improve developer **tracing and debugging** in XCM-related tests.
- Builds on #7594, improving **log visibility** while maintaining test **log capture capabilities**.
- While writing tests for #7234, I noticed this function was missing. This PR adds it to streamline log handling in unit tests.

## **Changes**
- Ensures logs up to `TRACE` level are **captured** (for assertions) and **printed** (for visibility).
- Refines documentation to clearly specify **when to use** each function.
- **Removes ANSI escape codes** from captured logs to ensure clean, readable assertions.

## **When to Use?**

| Usage | Captures Logs? | Prints Logs? | Example |
|-------|----------------|--------------|---------|
| `init_log_capture(LevelFilter::INFO, false)` | ✅ Yes | ❌ No | Capture logs for assertions without printing. |
| `init_log_capture(LevelFilter::TRACE, true)` | ✅ Yes | ✅ Yes | Capture logs and print them in test output. |
| `sp_tracing::init_for_tests()` | ❌ No | ✅ Yes | Print logs to the console without capturing. |

--------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
PG Herveou authored
Small tweaks to the eth-rpc-tester bin --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
- Mar 05, 2025
-
-
Bastian Köcher authored
Right now `pallet-scheduler` is not putting back postponed tasks into the agenda when the early weight check is failing. This pull request ensures that these tasks are put back into the agenda and are not just "lost". --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com> Co-authored-by:
Alexandre R. Baldé <alexandre.balde@parity.io>
-
Oliver Tale-Yazdi authored
Make some more stuff public that will be useful for AHM to reduce code duplication. --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Alexandru Vasile authored
The tokio instant timer produced an overflow when `poll_tick` was called, because the timer period was set to `u64::MAX`. The period is reduced to accommodate the following tokio time addition:

Source code [tokio/time/interval.rs](https://github.com/tokio-rs/tokio/blob/a2b12bd5799f06e912b32ac05a5ffb5cf1fe31cd/tokio/src/time/interval.rs#L478-L485):
```rust
let next = if now > timeout + Duration::from_millis(5) {
    self.missed_tick_behavior
        .next_timeout(timeout, now, self.period)
} else {
    timeout
        .checked_add(self.period)
        .unwrap_or_else(Instant::far_future)
};
```

Detected by: https://github.com/paritytech/polkadot-sdk/actions/runs/13648141251/job/38150825582?pr=7790
```
──── TRY 1 STDERR: sc-network protocol::notifications::tests::conformance::litep2p_disconnects_libp2p_substream
thread 'protocol::notifications::tests::conformance::litep2p_disconnects_libp2p_substream' panicked at std/src/time.rs:417:33:
overflow when adding duration to instant
stack backtrace:
   0: rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::option::expect_failed
   3: <std::time::Instant as core::ops::arith::Add<core::time::Duration>>::add
   4: tokio::time::interval::Interval::poll_tick
   5: sc_network::protocol::notifications::tests::conformance::litep2p_disconnects_libp2p_substream::{{closure}}
   6: tokio::runtime::scheduler::current_thread::Context::enter
   7: tokio::runtime::context::scoped::Scoped<T>::set
   8: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
   9: tokio::runtime::runtime::Runtime::block_on
  10: sc_network::protocol::notifications::tests::conformance::litep2p_disconnects_libp2p_substream
  11: core::ops::function::FnOnce::call_once
```
cc @paritytech/networking Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io>
-
Oliver Tale-Yazdi authored
This can error when you use `cargo remote` and probably also with `cargo vendor`. Still seeing two more build errors, but at least this one is fixed.

Other one:
```pre
error: set `DATABASE_URL` to use query macros online, or run `cargo sqlx prepare` to update the query cache
   --> substrate/frame/revive/rpc/src/receipt_provider/db.rs:123:17
    |
123 |       let result = query!(
    |  __________________________^
124 | |         r#"
125 | |         INSERT OR REPLACE INTO transaction_hashes (transaction_hash, block_hash, transaction_index)
126 | |         VALUES ($1, $2, $3)
...   |
130 | |         transaction_index
131 | |         )
```
and (maybe Rust version related, this is 1.84.1)
```pre
error[E0282]: type annotations needed
   --> substrate/frame/revive/rpc/src/receipt_provider/db.rs:102:34
    |
102 |     let (tx_result, logs_result) = tokio::join!(delete_transaction_hashes, delete_logs);
...
```
-
- Mar 04, 2025
-
-
Xavier Lau authored
- Refactor to use the `frame` crate.
- Use the procedural macro version of `construct_runtime` in mock.
- Expose `PalletId` to `frame::pallet_prelude`.
- Part of #6202.

---
Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Signed-off-by:
Xavier Lau <x@acg.box> Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
PG Herveou authored
In Solidity, `block.timestamp` is expressed in seconds; see https://docs.soliditylang.org/en/latest/units-and-global-variables.html#block-and-transaction-properties --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Alexander Theißen <alex.theissen@me.com>
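The unit change described above boils down to a division, assuming the runtime-side timestamp is kept in milliseconds as is usual for pallet-timestamp; a trivial sketch:
```rust
/// Illustrative only: convert a millisecond timestamp to the seconds expected
/// by Solidity's `block.timestamp`.
fn eth_block_timestamp_secs(timestamp_ms: u64) -> u64 {
    timestamp_ms / 1000
}
```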
-
- Mar 03, 2025
-
-
polka.dom authored
When working with storage types that are to be set in the genesis block, deriving serde::Serialize & serde::Deserialize is necessary (to my knowledge). This PR introduces Serialize and Deserialize into the umbrella crate derive (and indirectly prelude) module, allowing for similar access as the other storage value derives. --------- Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
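A hedged sketch of the use case, written against `serde` directly; per this entry the same derives would also be reachable through the umbrella crate's derive (and prelude) module, whose exact re-export path is not shown here:
```rust
use serde::{Deserialize, Serialize};

/// Illustrative genesis-config value: the serde derives are what allow it to
/// be set from a (JSON) genesis configuration.
#[derive(Clone, Default, Serialize, Deserialize)]
pub struct GenesisSettings {
    pub initial_supply: u64,
}
```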
-
clangenb authored
Last subtask from https://github.com/paritytech/polkadot-sdk/issues/5704. Closes #5704. The substrate-node is not 100% free of the native runtime yet, but the code has become less convoluted and better documented. The final cleanup needs https://github.com/paritytech/polkadot-sdk/issues/7748. --------- Co-authored-by:
Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
-
Alexander Theißen authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/6157 This fixes the last remaining benchmark that was not correct, since it was too low level to be written in Rust. Instead, we opted for a different approach: this PR changes the benchmark that determines the scaling from `ref_time` to PolkaVM `Gas` by benchmarking the absolute worst case of an instruction, one that causes two cache misses by touching two cache lines. The contract itself is designed to be as simple as possible. It does random unaligned reads in a loop until the `r` (repetition) number is reached. The randomness is fully generated by the host and written to the guest's memory before the benchmark is run. This allows the benchmark to determine the influence of one loop iteration via linear regression. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
xermicus <cyrill@parity.io> Co-authored-by:
PG Herveou <pgherveou@gmail.com>
-
Matteo Muraca authored
Description: Part of #3326. As per title, `pallet::getter` usage has been removed from `pallet-nft-fractionalization`. --------- Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
- Feb 28, 2025
-
-
Alexandru Vasile authored
This PR handles a case where we called `poll_next` on an outbound substream notification to check if the stream is closed. It is entirely possible that `poll_next` would return an `io::Error`, for example end of file. This PR ensures that we make the distinction between unexpected incoming data and errors originating from `poll_next`. While at it, the bulk of the PR change propagates the `PeerId` from the network behavior, through the notification handler, to the notification outbound stream for logging purposes. cc @paritytech/networking Part of: https://github.com/paritytech/polkadot-sdk/issues/7722 --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io>
-
Alexander Theißen authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/6723

## Motivation
Internal auditors recommended not to truncate Polkadot addresses when deriving Ethereum addresses from them. The reasoning is that they are raw public keys, where truncating could lead to collisions when weaknesses in those curves are discovered in the future. Additionally, some pallets generate account addresses in a way where only the suffix we were truncating contains any entropy. The changes in this PR act as a safeguard against those two points.

## Changes made
We change the `to_address` function to first hash the `AccountId32` and then use the trailing 20 bytes as the `AccountId20` (a minimal sketch of this derivation follows after this entry). If the `AccountId32` ends with 12x `0xEE`, we keep our current behaviour of just truncating those trailing bytes.

## Security Discussion
This will allow us to still recover the original `AccountId20`, because those are constructed by just adding those 12 bytes. Please note that generating an ed25519 key pair where the trailing 12 bytes are `0xEE` is theoretically possible, as 96 bits is not a huge search space. However, this cannot be used as an attack vector. It will merely allow this address to interact with `pallet_revive` without registering, as the fallback account is the same as the actual address. The ultimate vanity address. In practice, this is not relevant, since the `0xEE` addresses are not valid public keys for sr25519, which is used almost everywhere.

tl;dr: We keep truncating in case of an Ethereum-address-derived account id. This is safe, as those are already derived via keccak. In every other case, we have to assume that the account id might be a public key. Therefore we first hash and then take the trailing bytes.

## Do we need a Migration for Westend
No. We changed the name of the mapping. This means the runtime will not try to read the old data. Ethereum keys are unaffected by this change. We just advise people to re-register their `AccountId32` in case they need to use it, as it is a very small circle of users (just 3 addresses registered). This will not cause disturbance on Westend. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
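A minimal sketch of the derivation described above, with `keccak_256` left as a stand-in for the runtime's Keccak-256 hashing; the real `pallet_revive` implementation may differ in details:
```rust
/// Stand-in for the runtime's Keccak-256 host function
/// (e.g. `sp_io::hashing::keccak_256`).
fn keccak_256(_data: &[u8]) -> [u8; 32] {
    unimplemented!("plug in a real Keccak-256 implementation")
}

/// Derive a 20-byte Ethereum-style address from a 32-byte account id.
fn to_address(account_id: &[u8; 32]) -> [u8; 20] {
    let mut out = [0u8; 20];
    if account_id[20..].iter().all(|b| *b == 0xEE) {
        // Fallback accounts derived from an Ethereum address keep the old
        // truncation behaviour, so the original AccountId20 stays recoverable.
        out.copy_from_slice(&account_id[..20]);
    } else {
        // Anything that might be a raw public key is hashed first; only the
        // trailing 20 bytes of the hash are used.
        let hash = keccak_256(account_id);
        out.copy_from_slice(&hash[12..]);
    }
    out
}
```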
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/7360 This PR adds `DecodeWithMemTracking` as a trait bound for `Header`, `Block` and `TransactionExtension` and derives it for all the types that implement these traits in `polkadot-sdk`.
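For downstream types that end up inside a `Header`, `Block` or `TransactionExtension`, this mostly means one more derive next to the usual codec traits; a hedged sketch, assuming the derive is the one provided by `parity-scale-codec`:
```rust
use parity_scale_codec::{Decode, DecodeWithMemTracking, Encode};

/// Illustrative custom type: alongside the usual codec derives it now also
/// derives `DecodeWithMemTracking` so it can satisfy the new trait bound.
#[derive(Encode, Decode, DecodeWithMemTracking)]
pub struct MyExtensionData {
    pub tip: u64,
}
```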
-
- Feb 27, 2025
-
-
Utkarsh Bhardwaj authored
# Description
* This PR adds a new extrinsic `poke_deposit` to `pallet-multisig`. This extrinsic will be used to re-adjust the deposits made in the pallet to create a multisig operation after AHM.
* Part of #5591

## Review Notes
* Added a new extrinsic `poke_deposit` in `pallet-multisig`.
* Added a new event `DepositPoked` to be emitted upon a successful call of the extrinsic.
* Although the immediate use of the extrinsic will be to give back some of the deposit after the AH-migration, the extrinsic is written such that it can work if the deposit decreases or increases (both).
* The call to the extrinsic would be `free` if an actual adjustment is made to the deposit and `paid` otherwise.
* Added tests to test all scenarios.

## TO-DOs
* [x] Add Benchmark
* [x] Run CI cmd bot to benchmark
--------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io>
-
Alexandru Vasile authored
This PR ensures compatibility in terms of expectations between the libp2p and litep2p network backends at the notification protocol level. The libp2p node is tested with the `Notification` behavior that contains the protocol controller, while litep2p is tested at the lowest level API (without substrate shim layers).

## Notification Behavior
(I) The libp2p protocol controller will eagerly reopen a closed substream, even if it is the one that closed it:
- When a node (libp2p or litep2p) closes the substream with **libp2p**, the **libp2p** controller will reopen the substream
- When **libp2p** closes the substream with a node (either litep2p with no controller or libp2p), the **libp2p** controller will reopen the substream
  - However, in this case, libp2p was the one closing the substream, signaling it is no longer interested in communicating with the other side

(II) Notifications are lost and not reported to the higher level in the following scenario:
- T0: Node A opens a substream with Node B
- T1: Node A closes the substream or the connection with Node B
- T2: Node B sends a notification to Node A => *notification is lost* and never reported
- T3: Node B detects the closed substream or connection

## Testing
This PR effectively checks:
- connectivity at the notification level
- litep2p rejecting a libp2p substream and keep-alive mechanism functionality
- libp2p disconnecting libp2p and connection re-establishment (and all the other permutations)
- idling of connections with active substreams while the keep-alive mechanism is not enforced

Prior work:
- https://github.com/paritytech/polkadot-sdk/pull/7361

cc @paritytech/networking --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by:
Dmitry Markin <dmitry@markin.tech>
-
- Feb 26, 2025
-
-
Ankan authored
Closes https://github.com/paritytech/polkadot-sdk/issues/5742. Needs to be backported to the stable2503 release.

With the migration of staking accounts to [fungible currency](https://github.com/paritytech/polkadot-sdk/pull/5501), we can now allow pool users to stake directly and vice versa. This update introduces a configurable filter mechanism to determine which accounts can join a nomination pool.

## Example Usage

### 1. Allow any account to join a pool
To permit all accounts to join a nomination pool, use the `Nothing` filter:
```rust
impl pallet_nomination_pools::Config for Runtime {
    ...
    type Filter = Nothing;
}
```

### 2. Restrict direct stakers from joining a pool
To prevent direct stakers from joining a nomination pool, use `pallet_staking::AllStakers`:
```rust
impl pallet_nomination_pools::Config for Runtime {
    ...
    type Filter = pallet_staking::AllStakers<Runtime>;
}
```

### 3. Define a custom filter
For more granular control, you can define a custom filter:
```rust
struct MyCustomFilter<T: Config>(core::marker::PhantomData<T>);

impl<T: Config> Contains<T::AccountId> for MyCustomFilter<T> {
    fn contains(account: &T::AccountId) -> bool {
        todo!("Implement custom logic. Return `false` to allow the account to join a pool.")
    }
}
```
--------- Co-authored-by:
Bastian Köcher <info@kchr.de>
-