- Mar 07, 2025
-
-
Michal Kucharczyk authored
#### PR Description

This pull request introduces measures to handle finality stalls by:
- notifying outdated transactions with a [`FinalityTimeout`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/api/src/lib.rs#L145-L147) event,
- removing outdated views from the `view_store`.

An item is considered _outdated_ when the difference between its associated block and the current block exceeds a pre-defined threshold (a minimal sketch of this check follows this entry).

#### Note for Reviewers

The core logic is provided in the following small commits:
- `ViewStore`: a new method [`finality_stall_view_cleanup`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/src/fork_aware_txpool/view_store.rs#L869-L903) for removing stale views was added: 64267000
- `ForkAwareTransactionPool`: the core logic for tracking finality stalls was added here: 7b37ea6f. The entry point is [`finality_stall_cleanup`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1096-L1136).
- Some related renaming was done to better reflect the purpose of, and shorten, the names: 1a3a1284, a511601f. A new method [`transactions_finality_timeout`](https://github.com/paritytech/polkadot-sdk/blob/a511601f/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L771-L790) for triggering external events was also added to `MultiViewListener`.
- `included_transactions`, which is essentially a mapping of `block hash -> included transaction hashes`, is also used to find the included transactions.

I also sneaked in some minor improvements:
- fixed per-transaction logging: 1572f721
- the `handle_pre_finalized` method was removed; it was an old leftover which is no longer needed: a6f84ad0

closes: #5482

--------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Sebastian Kunert <skunert49@gmail.com> Co-authored-by:
Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
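Below is a minimal, hypothetical sketch of the staleness rule described above; the names (`is_outdated`, `associated_block`, `finality_stall_threshold`) are illustrative and not the actual polkadot-sdk API.

```rust
/// Illustrative only: decides whether an item (a view, or a transaction's
/// inclusion record) is considered outdated with respect to finality.
///
/// `associated_block` is the block number the item is associated with,
/// `current_block` is the most recent block number, and
/// `finality_stall_threshold` is the pre-defined threshold mentioned above.
fn is_outdated(associated_block: u64, current_block: u64, finality_stall_threshold: u64) -> bool {
    current_block.saturating_sub(associated_block) > finality_stall_threshold
}

fn main() {
    // With a threshold of 32 blocks, an item associated with block 100
    // becomes outdated once the chain reaches block 133.
    assert!(!is_outdated(100, 132, 32));
    assert!(is_outdated(100, 133, 32));
}
```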
-
Iulian Barbu authored
# Description Builds up towards addressing #5497 by creating some zombienet-sdk code infra that can be used to spin up regular networks, as described in the fork-aware transaction pool testing setup added in #7100. It will be used for developing tests against such networks, and also to spawn them on demand locally through tooling that will be developed in follow-ups. ## Integration Node/runtime developers can run tests based on the zombienet-sdk infra that spins up frequently used networks, which can be used for analyzing the behavior of various node-related components, like the fork-aware transaction pool. ## Review Notes - Uses the ttxt API implemented here: https://github.com/michalkucharczyk/tx-test-tool/pull/22/files - Currently, only two test scenarios are considered: 10k future & 10k ready txs are sent to two separate networks - one parachain and one relay chain - asserting at the end on the finalization of all 20k txs on both networks. --------- Signed-off-by:
Iulian Barbu <iulian.barbu@parity.io> Co-authored-by:
Javier Viola <363911+pepoviola@users.noreply.github.com> Co-authored-by:
Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
-
Bastian Köcher authored
This PR ensures that we remove the `authorization` for a runtime upgrade if the version check failed (a small illustrative sketch follows this entry). If that check fails, it means that the runtime upgrade is invalid and the check will never succeed. Besides that, the PR does some cleanups. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
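A minimal, hypothetical sketch of the behaviour described above (the type and function names are illustrative, not the actual `frame_system` code): once the version check of an authorized upgrade fails, the stored authorization is cleared rather than left in place.

```rust
/// Illustrative only: a stored authorization for a runtime upgrade.
struct AuthorizedUpgrade {
    code_hash: [u8; 32],
    check_version: bool,
}

/// Illustrative only: if the version check of an authorized upgrade fails, the
/// stored authorization is cleared instead of being left in place, because the
/// same check would fail forever.
fn check_authorized_upgrade(
    authorization: &mut Option<AuthorizedUpgrade>,
    version_check_ok: bool,
) -> Result<[u8; 32], &'static str> {
    let auth = authorization.take().ok_or("nothing was authorized")?;
    if auth.check_version && !version_check_ok {
        // The authorization stays removed: the upgrade is invalid.
        return Err("version check failed");
    }
    Ok(auth.code_hash)
}

fn main() {
    let mut auth = Some(AuthorizedUpgrade { code_hash: [1; 32], check_version: true });
    assert!(check_authorized_upgrade(&mut auth, false).is_err());
    // The failed version check removed the authorization.
    assert!(auth.is_none());
}
```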
-
- Mar 06, 2025
-
-
thiolliere authored
If the inner transaction extension used inside `SkipCheckIfFeeless` is a tuple of multiple extensions, then the metadata was not correct; it is now fixed. E.g. if the transaction extension is `SkipCheckIfFeeless::<Runtime, (Payment1, Payment2)>`, then the metadata was wrong. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Utkarsh Bhardwaj authored
# Description * This PR adds a new extrinsic `poke_deposit` to `pallet-proxy`. This extrinsic will be used to re-adjust the deposits made in the pallet to create a proxy or to create an announcement. * Part of #5591 ## Review Notes * Added a new extrinsic `poke_deposit` in `pallet-proxy`. * This extrinsic checks and adjusts the deposits made for either creating a proxy or creating an announcement or both. * Added a new event `DepositPoked` to be emitted upon a successful call of the extrinsic. * Although the immediate use of the extrinsic will be to give back some of the deposit after the AH-migration, the extrinsic is written such that it can work if the deposit decreases or increases (both). * The call to the extrinsic would be `free` if an actual adjustment is made to the deposit for creating a proxy or to the deposit for creating an announcement or both and `paid` otherwise (when no deposit is changed). * Added a new enum `DepositKind` to differen...
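As a rough illustration of what "poking" a deposit means (a hypothetical helper, not the actual pallet-proxy code): the currently reserved amount is compared with what the deposit formula currently requires, and only the difference is reserved or unreserved; if nothing changes, the caller pays the fee.

```rust
/// Illustrative only: outcome of re-adjusting ("poking") a deposit.
#[derive(Debug, PartialEq)]
enum DepositAdjustment {
    Unchanged,
    Reserve(u128),
    Unreserve(u128),
}

/// Illustrative only: compare the currently reserved deposit with what the
/// current deposit formula requires and return the adjustment to apply.
fn poke_deposit(old_deposit: u128, required_deposit: u128) -> DepositAdjustment {
    use DepositAdjustment::*;
    match required_deposit {
        r if r == old_deposit => Unchanged,
        r if r > old_deposit => Reserve(r - old_deposit),
        r => Unreserve(old_deposit - r),
    }
}

fn main() {
    // Deposit prices dropped after a migration: part of the deposit is returned.
    assert_eq!(poke_deposit(100, 60), DepositAdjustment::Unreserve(40));
    // No change: the call would be paid, since no adjustment is made.
    assert_eq!(poke_deposit(60, 60), DepositAdjustment::Unchanged);
}
```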
-
Alexander Samusev authored
More details [here](https://github.com/paritytech/security/issues/103#issuecomment-2682362253) cc https://github.com/paritytech/security/issues/103
-
seemantaggarwal authored
Follow-up from #7619. Adds more verbose rustdoc for guidance on, and the difference between, the chain-spec-builder and build-spec commands under polkadot-omni-node. --------- Co-authored-by:
Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Andrii authored
[XCM] Add generic location to account converter that also works with external ecosystems for bridge hubs (#7809) This is the continuation of [PR#7313](https://github.com/paritytech/polkadot-sdk/pull/7313)
-
clangenb authored
I discovered in https://github.com/paritytech/polkadot-sdk/pull/7459 that the overhead benchmark is not working for glutton-westend, as the client can't send `system.remark` extrinsics. This was due to 2 issues: 1. Alice was not set as sudo. Hence, the `CheckOnlySudoAccount` deemed the extrinsic invalid. 2. The `CheckNonce` TxExtension also marked the extrinsic as invalid, as the account doesn't exist (because glutton has no balances pallet). This PR fixes 1) for now. I wanted to simply remove the `CheckNonce` from the TxExtension to fix 2), but it turns out that this is not possible, as the tx-pool needs the nonce tag to identify the transaction (an illustrative sketch of such a tag follows this entry). https://github.com/paritytech/polkadot-sdk/pull/6884 will fix sending extrinsics on glutton. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by:
s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@user...>
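As a rough illustration of the "nonce tag" mentioned above (a sketch of the idea only; in substrate the actual tag is produced by the `CheckNonce` transaction extension, typically by encoding the sender together with the nonce):

```rust
/// Illustrative only: a transaction-pool "provides" tag derived from the
/// sender and the nonce. Two transactions from the same sender with the same
/// nonce produce the same tag, which lets the pool treat them as replacements
/// for one another.
fn nonce_tag(sender: &[u8; 32], nonce: u64) -> Vec<u8> {
    let mut tag = sender.to_vec();
    tag.extend_from_slice(&nonce.to_le_bytes());
    tag
}

fn main() {
    let alice = [1u8; 32];
    // Same sender and nonce => same tag; bumping the nonce changes the tag.
    assert_eq!(nonce_tag(&alice, 7), nonce_tag(&alice, 7));
    assert_ne!(nonce_tag(&alice, 7), nonce_tag(&alice, 8));
}
```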
-
Dónal Murray authored
Asset Hub Next has been deployed on Westend as parachain 1100, but it's not yet a trusted teleporter. This minimal PR adds it in stable2412 so that it can be deployed right away without waiting for the rest of the release to be finalised and deployed. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Egor_P authored
This PR is an improvement of the Create Release Draft workflow. Due to the switch to the direct use of `gh` to upload artefacts to the release draft, its behaviour has changed slightly: `gh` renames only the display name of the file, but not the file itself. This PR adds renaming before the upload so that the displayed name of the attachment and the file itself have the same name, which includes the `spec_version`. Closes: https://github.com/paritytech/release-engineering/issues/250 cc: @BulatSaif
-
Raymond Cheung authored
This PR enhances **`test_log_capture`**, ensuring logs are **captured for assertions** and **printed to the console** during test execution.

## **Motivation**
- Partially addresses #6119 and #6125, to improve developer **tracing and debugging** in XCM-related tests.
- Builds on #7594, improving **log visibility** while maintaining test **log capture capabilities**.
- While writing tests for #7234, I noticed this function was missing. This PR adds it to streamline log handling in unit tests.

## **Changes**
- Ensures logs up to `TRACE` level are **captured** (for assertions) and **printed** (for visibility).
- Refines documentation to clearly specify **when to use** each function.
- **Removes ANSI escape codes** from captured logs to ensure clean, readable assertions (an illustrative sketch of such stripping follows this entry).

## **When to Use?**
| Usage | Captures Logs? | Prints Logs? | Example |
|----------------------------------------------|-----------------|--------------|---...
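As a rough, hypothetical illustration of the ANSI-stripping mentioned above (not the actual `test_log_capture` implementation): escape sequences such as `\x1b[31m` are removed so that assertions run against plain text.

```rust
/// Illustrative only: strips ANSI CSI escape sequences (e.g. "\x1b[31m")
/// from a captured log line so that assertions can match plain text.
fn strip_ansi(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\x1b' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // Skip parameter/intermediate bytes until the final byte ('@'..='~').
            while let Some(&next) = chars.peek() {
                chars.next();
                if ('@'..='~').contains(&next) {
                    break;
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(strip_ansi("\x1b[31mERROR\x1b[0m xcm: failed"), "ERROR xcm: failed");
}
```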
-
Andrei Sandu authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/7664 The discussion in https://github.com/paritytech/polkadot-sdk/pull/7710 should provide the context for why this solution was picked. --------- Signed-off-by:
Andrei Sandu <andrei-mihail@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
PG Herveou authored
Small tweaks to the eth-rpc-tester bin --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
- Mar 05, 2025
-
-
Bastian Köcher authored
Right now, `pallet-scheduler` does not put postponed tasks back into the agenda when the early weight check fails. This pull request ensures that these tasks are put back into the agenda and are not just "lost" (a small illustrative sketch follows this entry). --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com> Co-authored-by:
Alexandre R. Baldé <alexandre.balde@parity.io>
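A minimal, hypothetical sketch of the idea described above; the `Agenda` shape, `Task` type and function names are illustrative and not the actual `pallet-scheduler` storage or API:

```rust
use std::collections::BTreeMap;

/// Illustrative only: a scheduled task and a block-number-keyed agenda.
#[derive(Debug, Clone, PartialEq)]
struct Task {
    name: &'static str,
    required_weight: u64,
}

type Agenda = BTreeMap<u32, Vec<Task>>;

/// Illustrative only: tries to run a task in `block`; if the remaining weight
/// is not enough, the task is postponed to `block + 1` instead of being lost.
fn execute_or_postpone(agenda: &mut Agenda, block: u32, task: Task, remaining_weight: u64) {
    if task.required_weight > remaining_weight {
        agenda.entry(block + 1).or_default().push(task);
    } else {
        // Execute the task here; nothing to put back.
    }
}

fn main() {
    let mut agenda = Agenda::new();
    let heavy = Task { name: "heavy", required_weight: 100 };
    execute_or_postpone(&mut agenda, 10, heavy.clone(), 50);
    // The task was postponed to the next block rather than dropped.
    assert_eq!(agenda.get(&11), Some(&vec![heavy]));
}
```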
-
Alexander Samusev authored
close https://github.com/paritytech/ci_cd/issues/1114
-
Nathaniel Bajo authored
I added a new `ExternalConsensusLocationsConverterFor` struct to handle external global consensus locations and their child locations. This struct extends the functionality of existing converters (`GlobalConsensusParachainConvertsFor` and `EthereumLocationsConverterFor`) while maintaining backward compatibility. Fixes https://github.com/paritytech/polkadot-sdk/issues/7129 Polkadot address: 121HJWZtD13GJQPD82oEj3gSeHqsRYm1mFgRALu4L96kfPD1 --------- Co-authored-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
ndk <ndk@parity.io>
-
Oliver Tale-Yazdi authored
Make some more stuff public that will be useful for AHM to reduce code duplication. --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Dmitry Markin authored
Allow adding extra request-response protocols during polkadot service initialization. This is required to add a request-response protocol described in [RFC-0008](https://polkadot-fellows.github.io/RFCs/approved/0008-parachain-bootnodes-dht.html) to the relay chain side of the parachain node. ### Review notes The PR might look scary due to a lot of code being moved. It is easier to review it on a per-commit basis. The commits that do not contain changes to the code logic are named accordingly. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Alexandru Vasile authored
The tokio instant timer produced an overflow when `poll_tick` was called, because the timer period was set to `u64::MAX`. The period is reduced to accommodate the following tokio time addition. Source code [tokio/time/interval.rs](https://github.com/tokio-rs/tokio/blob/a2b12bd5799f06e912b32ac05a5ffb5cf1fe31cd/tokio/src/time/interval.rs#L478-L485):

```rust
let next = if now > timeout + Duration::from_millis(5) {
    self.missed_tick_behavior
        .next_timeout(timeout, now, self.period)
} else {
    timeout
        .checked_add(self.period)
        .unwrap_or_else(Instant::far_future)
};
```

Detected by: https://github.com/paritytech/polkadot-sdk/actions/runs/13648141251/job/38150825582?pr=7790

```
──── TRY 1 STDERR: sc-network protocol::notifications::tests::conformance::litep2p_disconnects_libp2p_substream thread 'protocol::no...
```
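The gist of the fix as a standalone sketch; the `MAX_PERIOD` constant and `capped_period` helper are hypothetical, not the actual change, and only show why capping the period avoids the overflowing `Instant` addition quoted above:

```rust
use std::time::Duration;

/// Hypothetical cap: far longer than any realistic polling interval, yet small
/// enough that `deadline + period` does not overflow `Instant` arithmetic.
const MAX_PERIOD: Duration = Duration::from_secs(u32::MAX as u64);

/// Illustrative only: clamp a requested timer period before handing it to an
/// interval timer such as `tokio::time::interval`.
fn capped_period(requested: Duration) -> Duration {
    requested.min(MAX_PERIOD)
}

fn main() {
    assert_eq!(capped_period(Duration::from_secs(5)), Duration::from_secs(5));
    assert_eq!(capped_period(Duration::MAX), MAX_PERIOD);
}
```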
-
Oliver Tale-Yazdi authored
This can error when you use `cargo remote` and probably also with `cargo vendor`. Still seeing two more build errors, but at least this one is fixed. Other one:

```pre
error: set `DATABASE_URL` to use query macros online, or run `cargo sqlx prepare` to update the query cache
   --> substrate/frame/revive/rpc/src/receipt_provider/db.rs:123:17
    |
123 |           let result = query!(
    |  __________________________^
124 | |             r#"
125 | |             INSERT OR REPLACE INTO transaction_hashes (transaction_hash, block_hash, transaction_index)
126 | |             VALUES ($1, $2, $3)
...   |
130 | |                 transaction_index
131 | |             )
```

and (maybe Rust version related, this is 1.84.1)

```pre
error[E0282]: type annotations needed
   --> substrate/frame/revive/rpc/src/receipt_provider/db.rs:102:34
    |
102 |         let (tx_result, logs_result) = tokio::join!(delete_transaction_hashes, delete_logs);
    |                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot infer type
    |
    = note: this error originates in the macro `$crate::join` which comes from the expansion of the macro `tokio::join` (in Nightly builds, run with -Z macro-backtrace for more info)
```

--------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
huntbounty authored
resolves #7491
-
- Mar 04, 2025
-
-
Xavier Lau authored
- Refactor to use the `frame` crate. - Use the procedural macro version of `construct_runtime` in the mock. - Expose `PalletId` to `frame::pallet_prelude`. - Part of #6202. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Signed-off-by:
Xavier Lau <x@acg.box> Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Ludovic_Domingues authored
# Description Migrating polkadot-runtime-parachains configuration benchmarking to the new benchmarking syntax v2. This is a part of #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
seemantaggarwal <32275622+seemantaggarwal@users.noreply.github.com>
-
PG Herveou authored
In Solidity, `block.timestamp` should be expressed in seconds; see https://docs.soliditylang.org/en/latest/units-and-global-variables.html#block-and-transaction-properties (a sketch of the conversion follows this entry). --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Alexander Theißen <alex.theissen@me.com>
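Substrate's timestamp pallet keeps time in milliseconds, so exposing a Solidity-compatible `block.timestamp` needs a milliseconds-to-seconds conversion. The helper below is a hypothetical illustration, not the actual pallet_revive code:

```rust
/// Illustrative only: convert a block timestamp given in milliseconds (as
/// stored by the timestamp pallet) into the seconds expected by Solidity's
/// `block.timestamp`.
fn block_timestamp_seconds(now_millis: u64) -> u64 {
    now_millis / 1_000
}

fn main() {
    // 1_700_000_000_000 ms since the Unix epoch is 1_700_000_000 s.
    assert_eq!(block_timestamp_seconds(1_700_000_000_000), 1_700_000_000);
}
```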
-
- Mar 03, 2025
-
-
polka.dom authored
When working with storage types that are to be set in the genesis block, deriving serde::Serialize & serde::Deserialize is necessary (to my knowledge). This PR introduces Serialize and Deserialize into the umbrella crate's derive (and, indirectly, prelude) module, allowing for access similar to the other storage value derives (a small illustrative example follows this entry). --------- Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
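A small, hypothetical example of the kind of genesis value the serde derives are needed for (the `FeeSettings` type is illustrative; `serde_json` is used only to demonstrate the round trip):

```rust
use serde::{Deserialize, Serialize};

/// Illustrative only: a value that a genesis config might need to set, which
/// therefore has to be (de)serializable from the chain-spec JSON.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct FeeSettings {
    base_fee: u64,
    per_byte_fee: u64,
}

fn main() {
    let settings = FeeSettings { base_fee: 100, per_byte_fee: 2 };
    let json = serde_json::to_string(&settings).unwrap();
    assert_eq!(serde_json::from_str::<FeeSettings>(&json).unwrap(), settings);
}
```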
-
clangenb authored
Last subtask from https://github.com/paritytech/polkadot-sdk/issues/5704. Closes #5704. The substrate-node is not 100% free of the native runtime yet, but the code has become less convoluted and better documented. The final cleanup needs https://github.com/paritytech/polkadot-sdk/issues/7748. --------- Co-authored-by:
Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
-
Andrei Eres authored
# Description Working with https://github.com/paritytech/polkadot-sdk/pull/7556 I encountered an internal compiler error on polkadot-omni-node-lib: see bellow or [in CI](https://github.com/paritytech/polkadot-sdk/actions/runs/13521547633/job/37781894640). ``` error: internal compiler error: compiler/rustc_traits/src/codegen.rs:45:13: Encountered error `SignatureMismatch(SignatureMismatchData { found_trait_ref: <{closure@sp_state_machine::trie_backend_essence::TrieBackendEssence<sc_service::Arc<dyn sp_state_machine::trie_backend_essence::Storage<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>, <<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing, sp_trie::cache::LocalTrieCache<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>, sp_trie::recorder::Recorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>::storage::{closure#1}} as std::ops::FnOnce<(std::option::Option<&mut dyn trie_db::TrieRecorder<sp_core::H256>>, std::option::Option<&mut dyn trie_db::TrieCache<sp_trie::node_codec::NodeCodec<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>>)>>, expected_trait_ref: <{closure@sp_state_machine::trie_backend_essence::TrieBackendEssence<sc_service::Arc<dyn sp_state_machine::trie_backend_essence::Storage<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>, <<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing, sp_trie::cache::LocalTrieCache<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>, sp_trie::recorder::Recorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>::storage::{closure#1}} as std::ops::FnOnce<(std::option::Option<&mut dyn trie_db::TrieRecorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hash>>, std::option::Option<&mut dyn trie_db::TrieCache<sp_trie::node_codec::NodeCodec<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>>)>>, terr: Sorts(ExpectedFound { expected: Alias(Projection, AliasTy { args: [Alias(Projection, AliasTy { args: [Alias(Projection, AliasTy { args: [NodeSpec/#0], def_id: DefId(0:410 ~ polkadot_omni_node_lib[7cce]::common::spec::BaseNodeSpec::Block), .. })], def_id: DefId(0:507 ~ polkadot_omni_node_lib[7cce]::common::NodeBlock::BoundedHeader), .. })], def_id: DefId(229:1706 ~ sp_runtime[5da1]::traits::Header::Hash), .. 
}), found: sp_core::H256 }) })` selecting `<{closure@sp_state_machine::trie_backend_essence::TrieBackendEssence<sc_service::Arc<dyn sp_state_machine::trie_backend_essence::Storage<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>, <<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing, sp_trie::cache::LocalTrieCache<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>, sp_trie::recorder::Recorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>::storage::{closure#1}} as std::ops::FnOnce<(std::option::Option<&mut dyn trie_db::TrieRecorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hash>>, std::option::Option<&mut dyn trie_db::TrieCache<sp_trie::node_codec::NodeCodec<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>>)>>` during codegen ``` Trying to parse the error I found that TrieRecorder was not supposed to work with H256: - Expected: `&mut dyn TrieRecorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as Header>::Hash>>` - Found: `&mut dyn TrieRecorder<sp_core::H256>>` The error happened because I added to `new_full_parts_with_genesis_builder` interaction with Trie Cache which eventually uses `TrieRecorder<H256>`. Here is the path: - In polkadot-omni-node-lib trait BaseNodeSpec defined with Block as `NodeBlock: BlockT<Hash = DbHash>`, where DbHash is H256. - BaseNodeSpec calls [new_full_parts_record_import::<Self::Block, … >](https://github.com/paritytech/polkadot-sdk/blob/75726c65/cumulus/polkadot-omni-node/lib/src/common/spec.rs#L184-L189) and eventually it goes to [new_full_parts_with_genesis_builder](https://github.com/paritytech/polkadot-sdk/blob/08b30246 /substrate/client/service/src/builder.rs#L195). - In `new_full_parts_with_genesis_builder` we accessed storage, initiating TrieRecorder with H256 what never happened before. I believe the compiler found a mismatch checking types for TrieRecorder: NodeBlock inherits from the trait `Block<Hash = DbHash>`, but it uses BoundedHeader, which inherits from the trait Header with the default Hash. --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
-
Alexander Theißen authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/6157 This fixes the last remaining benchmark that was not correct, since it was too low level to be written in Rust. Instead, a different approach was taken: this PR changes the benchmark that determines the scaling from `ref_time` to PolkaVM `Gas` by benchmarking the absolute worst case of an instruction: one that causes two cache misses by touching two cache lines. The contract itself is designed to be as simple as possible. It does random unaligned reads in a loop until the `r` (repetition) number is reached. The randomness is fully generated by the host and written to the guest's memory before the benchmark is run. This allows the benchmark to determine the influence of one loop iteration via linear regression (a standalone sketch of such a regression follows this entry). --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
xermicus <cyrill@parity.io> Co-authored-by:
PG Herveou <pgherveou@gmail.com>
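To illustrate "the influence of one loop iteration via linear regression" mentioned above, here is a self-contained sketch, independent of the actual benchmarking framework: given (repetitions, measured time) samples, a least-squares fit yields the per-iteration cost as the slope.

```rust
/// Illustrative only: least-squares slope of `y` over `x`, i.e. the estimated
/// cost of one additional loop iteration given (repetitions, time) samples.
fn per_iteration_cost(samples: &[(f64, f64)]) -> f64 {
    let n = samples.len() as f64;
    let sum_x: f64 = samples.iter().map(|(x, _)| x).sum();
    let sum_y: f64 = samples.iter().map(|(_, y)| y).sum();
    let sum_xy: f64 = samples.iter().map(|(x, y)| x * y).sum();
    let sum_xx: f64 = samples.iter().map(|(x, _)| x * x).sum();
    (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x)
}

fn main() {
    // Fixed overhead of 10 time units plus 3 units per iteration.
    let samples = [(1.0, 13.0), (2.0, 16.0), (4.0, 22.0), (8.0, 34.0)];
    let slope = per_iteration_cost(&samples);
    assert!((slope - 3.0).abs() < 1e-9);
}
```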
-
Matteo Muraca authored
Description Part of #3326 As per title, `pallet::getter` usage has been removed from `pallet-nft-fractionalization`. --------- Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
- Feb 28, 2025
-
-
Alexandru Vasile authored
This PR handles a case where we called `poll_next` on an outbound notification substream to check if the stream is closed. It is entirely possible that `poll_next` would return an `io::Error`, for example end of file. This PR ensures that we make the distinction between unexpected incoming data and errors originating from `poll_next`. While at it, the bulk of the PR change propagates the PeerId from the network behaviour, through the notification handler, to the notification outbound stream for logging purposes. cc @paritytech/networking Part of: https://github.com/paritytech/polkadot-sdk/issues/7722 --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io>
-
Sebastian Kunert authored
Follow-up of https://github.com/paritytech/polkadot-sdk/pull/7638, which attempted to remove contracts-rococo. But there were some leftover weight files still chilling in the repo.
-
Alexander Theißen authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/6723

## Motivation
Internal auditors recommended not to truncate Polkadot addresses when deriving Ethereum addresses from them. The reasoning is that they are raw public keys, where truncating could lead to collisions when weaknesses in those curves are discovered in the future. Additionally, some pallets generate account addresses in a way where only the suffix we were truncating contains any entropy. The changes in this PR act as a safeguard against those two points.

## Changes made
We change the `to_address` function to first hash the AccountId32 and then use the trailing 20 bytes as the `AccountId20`. If the `AccountId32` ends with 12x 0xEE, we keep our current behaviour of just truncating those trailing bytes (a sketch of this derivation follows this entry).

## Security Discussion
This will allow us to still recover the original `AccountId20`, because those are constructed by just adding those 12 bytes. Please note that generating an ed25519 key pair where the trailing 12 bytes are 0xEE is theoretically possible, as 96 bits is not a huge search space. However, this cannot be used as an attack vector. It will merely allow this address to interact with `pallet_revive` without registering, as the fallback account is the same as the actual address. The ultimate vanity address. In practice, this is not relevant since the 0xEE addresses are not valid public keys for sr25519, which is used almost everywhere.

tl;dr: We keep truncating in the case of an Ethereum-address-derived account id. This is safe, as those are already derived via keccak. In every other case, we have to assume that the account id might be a public key; therefore we first hash and then take the trailing bytes.

## Do we need a Migration for Westend
No. We changed the name of the mapping. This means the runtime will not try to read the old data. Ethereum keys are unaffected by this change. We just advise people to re-register their AccountId32 in case they need to use it, as it is a very small circle of users (just 3 addresses registered). This will not cause disturbance on Westend.

--------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
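A minimal sketch of the derivation described above (not the actual pallet_revive code): the Keccak-256 implementation is passed in as a closure so the example stays dependency-free, and the dummy hasher in `main` is only a stand-in.

```rust
/// Illustrative only: derive a 20-byte address from a 32-byte account id.
/// Ethereum-derived account ids (trailing 12 bytes all 0xEE) are truncated as
/// before; anything else is hashed first, and the trailing 20 bytes of the
/// hash are used.
fn to_address(account_id: &[u8; 32], keccak_256: impl Fn(&[u8]) -> [u8; 32]) -> [u8; 20] {
    let mut address = [0u8; 20];
    if account_id[20..] == [0xEE; 12] {
        address.copy_from_slice(&account_id[..20]);
    } else {
        let hash = keccak_256(account_id);
        address.copy_from_slice(&hash[12..]);
    }
    address
}

fn main() {
    // Dummy stand-in for Keccak-256, only so the example runs on its own.
    let dummy_hash = |data: &[u8]| {
        let mut out = [0u8; 32];
        for (i, b) in data.iter().enumerate() {
            out[i % 32] ^= b.rotate_left((i % 8) as u32);
        }
        out
    };

    let mut eth_derived = [0u8; 32];
    eth_derived[20..].copy_from_slice(&[0xEE; 12]);
    // Ethereum-derived ids keep the truncation behaviour...
    assert_eq!(to_address(&eth_derived, dummy_hash), [0u8; 20]);
    // ...while a "public key"-style id goes through the hash first.
    assert_ne!(to_address(&[7u8; 32], dummy_hash), [7u8; 20]);
}
```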
-
Egor_P authored
This PR updates `pgpgkms`, the tool used to sign releases, to the latest version to fix the issue with the `debian` publishing from the pipeline. Addresses: https://github.com/paritytech/release-engineering/issues/248
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/7360 This PR adds `DecodeWithMemTracking` as a trait bound for `Header`, `Block` and `TransactionExtension` and derives it for all the types that implement these traits in `polkadot-sdk`.
-
- Feb 27, 2025
-
-
Raymond Cheung authored
A follow-up PR to simplify event assertions by introducing `contains_event`, allowing event checks without needing exact field matches. This reduces redundancy and makes tests more flexible. Partially addresses #6119 by providing an alternative way to assert events. Reference: [PR #7594 - Discussion](https://github.com/paritytech/polkadot-sdk/pull/7594#discussion_r1965566349) --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Branislav Kontur <bkontur@gmail.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Utkarsh Bhardwaj authored
# Description * This PR adds a new extrinsic `poke_deposit` to `pallet-multisig`. This extrinsic will be used to re-adjust the deposits made in the pallet to create a multisig operation after AHM. * Part of #5591 ## Review Notes * Added a new extrinsic `poke_deposit` in `pallet-multisig`. * Added a new event `DepositPoked` to be emitted upon a successful call of the extrinsic. * Although the immediate use of the extrinsic will be to give back some of the deposit after the AH-migration, the extrinsic is written such that it can work if the deposit decreases or increases (both). * The call to the extrinsic would be `free` if an actual adjustment is made to the deposit and `paid` otherwise. * Added tests to test all scenarios. ## TO-DOs * [x] Add Benchmark * [x] Run CI cmd bot to benchmark --------- Co-authored-by:
cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io>
-
huntbounty authored
Resolves #7536 --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
Egor_P authored
This PR backports version bumps and prdocs reorg from the latest stable branch back to master
-
Alexandru Vasile authored
This PR ensures compatibility in terms of expectations between the libp2p and litep2p network backends at the notification protocol level. The libp2p node is tested with the `Notification` behavior that contains the protocol controller, while litep2p is tested at the lowest level API (without substrate shim layers). ## Notification Behavior (I) Libp2p protocol controller will eagerly reopen a closed substream, even if it is the one that closed it: - When a node (libp2p or litep2p) closes the substream with **libp2p**, the **libp2p** controller will reopen the substream - When **libp2p** closes the substream with a node (either litep2p with no controller or libp2p), the **libp2p** controller will reopen the substream - However in this case, libp2p was the one closing the substream signaling it is no longer interested in communicating with the other side (II) Notifications are lost and not reported to the higher level in the following scenario: - T0: Node A opens a substream with Node B - T1: Node A closes the substream or the connection with Node B - T2: Node B sends a notification to Node A => *notification is lost* and never reported - T3: Node B detects the closed substream or connection ## Testing This PR effectively checks: - connectivity at the notification level - litep2p rejecting libp2p substream and keep-alive mechanism functionality - libp2p disconnecting libp2p and connection re-establishment (and all the other permutations) - idling of connections with active substreams and keep-alive mechanism is not enforced Prior work: - https://github.com/paritytech/polkadot-sdk/pull/7361 cc @paritytech/networking --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by:
Dmitry Markin <dmitry@markin.tech>
-