- Dec 14, 2024
-
Jarkko Sakkinen authored
Bump `polkavm` to 0.18.0, and update `sc-polkavm-executor` to be compatible with the API changes. In addition, also bump `polkavm-derive` and `polkavm-linker` to make sure that all parts of the Polkadot SDK use the exact same ABI for `.polkavm` binaries. Relying purely on the RV32E/RV64E ABI is not possible, as PolkaVM uses a RISC-V-like ISA which is derived from RV32E/RV64E but is still its own microarchitecture, i.e. not fully binary compatible.
---------
Signed-off-by: Jarkko Sakkinen <jarkko@parity.io>
Co-authored-by: Koute <koute@users.noreply.github.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
-
- Dec 13, 2024
-
davidk-pt authored
Follow-up refactor to https://github.com/paritytech/polkadot-sdk/pull/6844#pullrequestreview-2497225717 I still need to finish adding `#[cfg(feature = "unstable-api")]` to the rest of the tests and make sure all tests pass; I want to make sure I'm moving in the right direction first @athei @xermicus
---------
Co-authored-by: DavidK <davidk@parity.io>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
-
Shawn Tabrizi authored
Many problems can occur when building and testing a parachain because of a misconfigured ParaId. This can happen because there are 3 different places you need to update! This PR makes a SINGLE location the source of truth for the ParaId.
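A minimal sketch of the idea, with hypothetical names (the template's real wiring differs): one shared constant feeds the chain spec, genesis config and node, so the ParaId is defined exactly once.
```rust
// Hypothetical sketch, not the template's actual code: one constant is the
// single source of truth, and every consumer derives its value from it.
/// The only place the parachain's id is defined.
pub const PARACHAIN_ID: u32 = 1000;

/// The chain spec, genesis config and node CLI all read the same constant.
fn chain_spec_json() -> String {
    format!(r#"{{ "para_id": {PARACHAIN_ID} }}"#)
}

fn main() {
    assert!(chain_spec_json().contains("1000"));
    println!("{}", chain_spec_json());
}
```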
-
Bastian Köcher authored
0.1.2 was yanked as it was breaking semver. --------- Co-authored-by: command-bot <>
-
Alexandru Gheorghe authored
Approval-voting canonicalize has an off-by-one error, which means that if we are finalizing blocks one by one, approval-voting cleans up only every other block. For example:
- With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be StoredBlockRange(1,7)
- When block 3 is finalized, canonicalize works and StoredBlockRange becomes (4,7)
- When block 4 is finalized, canonicalize exits early because of the `if range.0 > canon_number` break clause, so blocks are not cleaned up.
- When block 5 is finalized, canonicalize works, StoredBlockRange becomes (6,7), and both blocks 4 and 5 are cleaned up.

The consequence of this is that sometimes we keep block entries around after they are finalized, so at restart we consider these blocks and send them to approval-distribution. In most cases this is not a problem, but when finality is lagging on restart, approval-distribution will receive 4 as the oldest block it needs to work on, and since BlockFinalized is never resent for block 4 after restart it won't get the opportunity to clean that up. Therefore it will end up running approval-distribution aggression on block 4, because that is the oldest block it received from approval-voting for which it did not see a BlockFinalized signal.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
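A simplified, self-contained model of the off-by-one described above (illustrative types and logic, not the actual approval-voting code):
```rust
// Simplified model of the symptom: with range (4, 7), finalizing block 4
// cleans nothing; finalizing block 5 then cleans both 4 and 5 at once, so
// cleanup lags one block behind finality.
#[derive(Debug)]
struct StoredBlockRange(u32, u32);

fn canonicalize_buggy(range: &mut StoredBlockRange, canon_number: u32) {
    // Off-by-one: only strictly older blocks trigger cleanup, so the block
    // equal to the range start is skipped until the next finalization.
    if canon_number <= range.0 {
        return; // exits early, block entries stay around
    }
    range.0 = canon_number + 1;
}

fn main() {
    let mut range = StoredBlockRange(4, 7);
    canonicalize_buggy(&mut range, 4); // no-op: block 4 is not cleaned up
    canonicalize_buggy(&mut range, 5); // cleans blocks 4 and 5 at once
    assert_eq!(range.0, 6);
    println!("{range:?}");
}
```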
-
Bastian Köcher authored
The job of `SlotBasedBlockImport` is to collect the storage proofs of all blocks being imported. These storage proofs are forwarded alongside the block to the collation task. Right now they are just being thrown away; more logic will follow later. Basically this will be required to include multiple blocks into one `PoV`, which will then be done by the collation task.
---------
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
Co-authored-by: GitHub Action <action@github.com>
-
Dmitry Markin authored
Expose the Kademlia content providers API for use by `sc-network` client code:
1. Extend the `NetworkDHTProvider` trait with functions to start/stop providing content and query the DHT for the list of content providers for a given key.
2. Extend the `DhtEvent` enum with events reporting the found providers or query failures.
3. Implement the above for the libp2p & litep2p network backends.
---------
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
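A rough sketch of the shape of such an API; the method and variant names here are illustrative, the real additions live in `sc-network`:
```rust
// Illustrative sketch only; the real trait lives in `sc-network` and the
// exact names/signatures may differ.
type Key = Vec<u8>;
type PeerId = String; // stand-in for the real peer id type

trait NetworkDHTProvider {
    /// Start announcing this node as a content provider for `key`.
    fn start_providing(&self, key: Key);
    /// Stop announcing this node as a content provider for `key`.
    fn stop_providing(&self, key: Key);
    /// Query the DHT for providers of `key`; results arrive as `DhtEvent`s.
    fn get_providers(&self, key: Key);
}

/// Events the network backend reports back to the client.
enum DhtEvent {
    ProvidersFound(Key, Vec<PeerId>),
    ProvidersNotFound(Key),
}

fn main() {
    // A client would react to provider events roughly like this:
    let ev = DhtEvent::ProvidersFound(b"content-key".to_vec(), vec!["peer-1".into()]);
    match ev {
        DhtEvent::ProvidersFound(key, peers) => {
            println!("{} providers for {:?}", peers.len(), key)
        }
        DhtEvent::ProvidersNotFound(key) => println!("no providers for {:?}", key),
    }
}
```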
-
Niklas Adolfsson authored
This PR changes the server builder to be created once and shared/cloned for each connection, avoiding the extra overhead of constructing it for each connection (as it was before). I don't know why I constructed a new builder for each connection, because it's not needed, but it shouldn't make a big difference to my understanding.
---------
Co-authored-by: command-bot <>
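The gist, as a hedged sketch with stand-in types (not the actual RPC server code): build the shared state once, then hand each connection a cheap clone.
```rust
use std::sync::Arc;

// Stand-in for the RPC server builder; constructing it once and cloning a
// cheap handle per connection avoids rebuilding it on every accept.
#[derive(Clone)]
struct ServiceBuilder {
    config: Arc<String>, // shared, immutable configuration
}

fn main() {
    let builder = ServiceBuilder { config: Arc::new("rpc-config".into()) };
    for conn_id in 0..3 {
        // Before: a fresh builder was constructed here for every connection.
        // After: clone the shared one (an Arc bump, not a reconstruction).
        let per_conn = builder.clone();
        println!("connection {conn_id} uses {}", per_conn.config);
    }
}
```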
-
Cyrill Leutwiler authored
This PR adds an API method to query the contract call data input size. Part of #6770
---------
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
Co-authored-by: command-bot <>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
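From the contract side the new API could be used roughly like this (a sketch with a stubbed host call; the real function is exposed through pallet_revive's uapi crate and its exact name/signature may differ):
```rust
// Hypothetical contract-side usage; the host call is stubbed out so the
// sketch compiles standalone.
fn call_data_size() -> u64 {
    // In a real contract this would be a host call into the runtime.
    42
}

fn main() {
    // A contract can size its input buffer exactly instead of over-allocating.
    let len = call_data_size() as usize;
    let buf = vec![0u8; len];
    println!("call data is {len} bytes, buffer holds {}", buf.len());
}
```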
-
Alexander Theißen authored
Starting with Rust 1.82, `PanicInfo` is deprecated and will throw warnings when used. The new type has been available since Rust 1.81 and should be available on our CI.
---------
Co-authored-by: command-bot <>
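For context, the rename affects panic hooks: since Rust 1.81 the hook argument is `std::panic::PanicHookInfo`, and referring to it as `PanicInfo` warns from 1.82 on (while `core::panic::PanicInfo` stays for `#[panic_handler]`). A minimal example:
```rust
use std::panic;

fn main() {
    // Rust >= 1.81: the hook parameter type is `PanicHookInfo`; naming it
    // `PanicInfo` still compiles but warns starting with Rust 1.82.
    panic::set_hook(Box::new(|info: &panic::PanicHookInfo| {
        eprintln!("panic observed: {info}");
    }));
    let _ = panic::catch_unwind(|| panic!("boom"));
}
```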
-
Tsvetomir Dimitrov authored
Related to https://github.com/paritytech/polkadot-sdk/issues/1797

# The problem
When fetching collations in the collator protocol/validator side, we need to ensure that each parachain gets a fair share of core time based on its assignments in the claim queue. This means that the number of collations fetched per parachain should ideally be equal to (but definitely not bigger than) the number of claims for that particular parachain in the claim queue.

# Why the current implementation is not good enough
The current implementation doesn't guarantee such fairness. For each relay parent there is a `waiting_queue` (PerRelayParent -> Collations -> waiting_queue) which holds any unfetched collations advertised to the validator. The collations are fetched on a first-in-first-out principle, which means that if two parachains share a core and one of them is more aggressive, it might starve the other parachain. How? At each relay parent up to `max_candidate_depth` candidates ...
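A hedged sketch of the fairness rule (illustrative names, not the collator-protocol internals): never fetch more collations for a para than it has claims in the claim queue.
```rust
use std::collections::HashMap;

type ParaId = u32;

// Illustrative only: decide whether one more collation may be fetched for
// `para`, given how many claims it has in the claim queue.
fn can_fetch(
    fetched: &HashMap<ParaId, usize>,
    claim_queue: &[ParaId],
    para: ParaId,
) -> bool {
    let claims = claim_queue.iter().filter(|p| **p == para).count();
    let already = fetched.get(&para).copied().unwrap_or(0);
    // Never fetch more collations than the para has claims.
    already < claims
}

fn main() {
    let claim_queue = vec![1, 2, 1]; // para 1 has two claims, para 2 has one
    let mut fetched = HashMap::new();
    fetched.insert(1u32, 1usize);
    assert!(can_fetch(&fetched, &claim_queue, 1)); // 1 fetched < 2 claims
    fetched.insert(2, 1);
    assert!(!can_fetch(&fetched, &claim_queue, 2)); // quota used up
    println!("fairness check ok");
}
```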
-
- Dec 12, 2024
-
clangenb authored
[polkadot-runtime-parachains] migrate paras module to benchmarking v2 syntax

Part of:
* #6202
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
-
Dónal Murray authored
Fix the broker pallet auto-renew benchmarks, which have been broken since #4424, yielding `Weightless` due to some prices being set too low, as reported in #6474. Upon further investigation it turned out that the auto-renew contribution to `rotate_sale` was always failing, but the error was mapped. This is also fixed at the cost of a bit of setup overhead.

Fixes #6474

TODO:
- [x] Re-run weights
---------
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <git@kchr.de>
-
Bastian Köcher authored
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: command-bot <>
-
Alexandru Vasile authored
## [0.8.4] - 2024-12-12
This release aims to make the MDNS component more robust by fixing a bug that caused the MDNS service to fail to register opened substreams. Additionally, the release includes several improvements to the `identify` protocol, replacing `FuturesUnordered` with `FuturesStream` for better performance.

### Fixed
- mdns/fix: Failed to register opened substream ([#301](https://github.com/paritytech/litep2p/pull/301))

### Changed
- identify: Replace FuturesUnordered with FuturesStream ([#302](https://github.com/paritytech/litep2p/pull/302))
- chore: Update hickory-resolver to version 0.24.2 ([#304](https://github.com/paritytech/litep2p/pull/304))
- ci: Ensure cargo-machete is working with rust version from CI ([#303](https://github.com/paritytech/litep2p/pull/303))

cc @paritytech/networking
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
-
davidk-pt authored
Resolves https://github.com/paritytech/polkadot-sdk/issues/6720 The list of host functions used in the PolkaVM recompiler is here: https://github.com/paritytech/revive/blob/main/crates/runtime-api/src/polkavm_imports.c#L65
---------
Co-authored-by: DavidK <davidk@parity.io>
-
Kazunobu Ndong authored
# Description
Issue #6476. Collation-generation is not needed for validator nodes and should be removed.

## Implementation
Use a `DummySubsystem` for `collation_generation`.
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: command-bot <>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
-
Iulian Barbu authored
# Description
Get the runtime's metadata, parse it, and verify that the pallets list contains a pallet named `ParachainSystem` (for now) and that the block number is the same for both node and runtime. Ideally we'll add checks for other pallets too, at least a small set we consider mandatory for parachain compatibility. Closes: #5565

## Integration
Runtime devs must be made aware that to be fully compatible with Omni Node, certain naming conventions should be respected when defining pallets (e.g. we verify the existence of the parachain-system pallet by searching for a pallet with `name` `ParachainSystem` in the runtime's metadata). Not finding such a pallet will not influence the functionality yet, but by doing these checks we can provide useful feedback for runtimes that are clearly not implementing what's required for full parachain compatibility with Omni Node.

## Review Notes
- [x] parachain system check
- [x] check frame_system's metadata to ensure the block number in there is the same as the one on the node side
- [x] add tests for the previous checking logic
- [x] update omni node polkadot-sdk docs to make these conventions visible.
- [ ] add more pallets checks?
---------
Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
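A rough sketch of the convention check (hypothetical types; the real code decodes the runtime's frame-metadata):
```rust
// Hypothetical shapes; the real implementation parses actual runtime metadata.
struct PalletMetadata { name: String }
struct RuntimeMetadata { pallets: Vec<PalletMetadata> }

// The convention being checked: a pallet literally named "ParachainSystem".
fn has_parachain_system(meta: &RuntimeMetadata) -> bool {
    meta.pallets.iter().any(|p| p.name == "ParachainSystem")
}

fn main() {
    let meta = RuntimeMetadata {
        pallets: vec![PalletMetadata { name: "ParachainSystem".into() }],
    };
    if has_parachain_system(&meta) {
        println!("ParachainSystem pallet found");
    } else {
        eprintln!("runtime does not look parachain-compatible with Omni Node");
    }
}
```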
-
Cyrill Leutwiler authored
This PR implements the call data load API akin to [how it works on Ethereum](https://www.evm.codes/?fork=cancun#35). There will also be a second PR to adjust the input function to resemble the call data copy opcode on the EVM.
---------
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
Co-authored-by: command-bot <>
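The EVM semantics being mirrored, sketched standalone (this is not pallet_revive's actual host function): load 32 bytes of call data at an offset, zero-padded past the end.
```rust
// Sketch of CALLDATALOAD-style semantics: read 32 bytes at `offset`,
// zero-padding reads that run past the end of the call data.
fn call_data_load(call_data: &[u8], offset: usize) -> [u8; 32] {
    let mut out = [0u8; 32];
    if offset < call_data.len() {
        let end = (offset + 32).min(call_data.len());
        out[..end - offset].copy_from_slice(&call_data[offset..end]);
    }
    out
}

fn main() {
    let data = vec![0xAA; 40];
    assert_eq!(call_data_load(&data, 0), [0xAA; 32]);
    // Reading at offset 20 overlaps the end: 20 data bytes, 12 zero bytes.
    let word = call_data_load(&data, 20);
    assert_eq!(&word[..20], &[0xAA; 20]);
    assert_eq!(&word[20..], &[0u8; 12]);
    println!("calldataload semantics ok");
}
```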
-
Iulian Barbu authored
# Description
Upgrades parity-publish with fixes for `Error: no dep` and for a panic triggered when running `plan` while a new unpublished member crate has been introduced in the workspace.

## Integration
N/A

## Review Notes
Context: https://github.com/paritytech/polkadot-sdk/pull/6450#issuecomment-2537108971
---------
Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
-
- Dec 11, 2024
-
Alexander Theißen authored
Previously, we failed at runtime if an unknown or unstable host function was called. This required us to keep track of when a host function was added and when a code was deployed. We used the `api_version` to track at which API version each code was deployed. This made sure that when a new host function was added, old code wouldn't have access to it. This is necessary, as otherwise the behavior of a contract that made calls to a previously non-existent host function would change from "trap" to "do something". In this PR we remove the API version. Instead, we statically verify on upload that no non-existent host function is ever used in the code. This will allow us to add new host functions later without needing to keep track of when they were added. This simplifies the code and also gives immediate feedback if unknown host functions are used.
---------
Co-authored-by: GitHub Action <action@github.com>
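A hedged sketch of the upload-time verification (illustrative names, not the pallet's code): reject any blob whose imports are not in the known host-function set.
```rust
use std::collections::HashSet;

// Illustrative only: validate a blob's imported host functions at upload time
// instead of trapping at runtime when an unknown function is called.
fn validate_imports(imports: &[&str], known: &HashSet<&str>) -> Result<(), String> {
    for sym in imports {
        if !known.contains(sym) {
            return Err(format!("unknown host function `{sym}`; upload rejected"));
        }
    }
    Ok(())
}

fn main() {
    let known: HashSet<&str> = ["call_data_size", "call_data_load"].into();
    assert!(validate_imports(&["call_data_size"], &known).is_ok());
    let err = validate_imports(&["frobnicate"], &known).unwrap_err();
    println!("{err}"); // immediate feedback at upload, not a runtime trap
}
```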
-
Francisco Aguirre authored
`InitiateTransfer`, the new instruction introduced in XCMv5, allows preserving the origin after a cross-chain transfer via the usage of the `AliasOrigin` instruction. The receiving chain needs to be configured to allow this instruction to have its intended effect and not just throw an error. In this PR, I add the alias rules specified in the [RFC for origin preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md) to the Westend chains so we can test these scenarios on the testnet. The new scenarios include:
- Sending a cross-chain transfer from one system chain to another and doing a Transact in the same message (1 hop)
- Sending a reserve asset transfer from one chain to another going through Asset Hub and doing a Transact in the same message (2 hops)

The updated chains are:
- Relay: added `AliasChildLocation`
- Collectives: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
- People: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
- Coretime: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`

AssetHub already has `AliasChildLocation` and doesn't need the other config item. BridgeHub is not intended to be used by end users, so I didn't add any config item. I only added `AliasChildLocation` to the relay since we intend for it to be used less.
---------
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: command-bot <>
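A self-contained model of the `AliasChildLocation` rule using string stand-ins for XCM locations (the real type is an xcm-builder filter over `Location`s):
```rust
// Illustrative model: `AliasChildLocation` lets any origin alias into its
// own child locations, never the other way around.
type Location = String;

struct AliasChildLocation;

impl AliasChildLocation {
    fn allows(origin: &Location, target: &Location) -> bool {
        // A child location is a strict extension of its parent's path.
        target.starts_with(origin.as_str()) && target != origin
    }
}

fn main() {
    let para = "Parachain(1000)".to_string();
    let child = "Parachain(1000)/AccountId32(alice)".to_string();
    assert!(AliasChildLocation::allows(&para, &child));
    assert!(!AliasChildLocation::allows(&child, &para));
    println!("child aliasing permitted, parent aliasing refused");
}
```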
-
Alexander Theißen authored
I am planning to use `alloy_core` to implement precompile support in `pallet_revive`. I noticed that it is already used by Snowbridge. In order to unify the dependencies I did the following:
1. Switch to the `alloy-core` umbrella crate so that we have fewer individual dependencies to update.
2. Bump to the latest version and fix up the resulting compile errors.
-
Alexandru Gheorghe authored
After finality started lagging on Kusama around `2024-11-25 15:55:40`, nodes started being overloaded with messages and some restarted with
```
Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>
```
I think this happened because our aggression in its current form is way too spammy and creates problems in situations where we have already constructed blocks with a load of candidates to check, which is what happened around `#25933682` before and after. However, aggression does help in the nightmare scenario where the network is segmented and sparsely connected, so I tend to think we shouldn't completely remove it. The current configuration is:
```
l1_threshold: Some(16),
l2_threshold: Some(28),
resend_unfinalized_period: Some(8),
```
The way aggression works right now:
1. After L1 is triggered, all nodes send all messages they created to all the other nodes, on top of the messages they had already sent according to the topology.
2. Because of resend_unfinalized_period, all messages from step 1) are re-sent every 8 blocks. For example, say we have blocks 1 to 24 unfinalized; then at block 25 all messages for blocks 1 and 9 will be resent, and consequently at block 26 all messages for blocks 2 and 10 will be resent. This becomes worse as more blocks are created if backing backpressure has not kicked in yet. In total, this logic makes each node receive 3 * total_number_of_messages_per_block.
3. L2 aggression is way too spammy: when L2 aggression is enabled, all nodes send all messages of a block on GridXY. That means all messages are received and sent by each node at least 2*sqrt(num_validators) times, so on Kusama that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK. Even with a reasonable number of messages like 10k, which you can have if you escalated because of no-shows, you end up sending and receiving ~660k messages at once. I think that's what makes approval-distribution appear unresponsive on some nodes.
4. Duplicate messages are received by the nodes, which in turn mark the sender as banned, which may create more no-shows.

## Proposed improvements:
1. Make L2 trigger way later, at 64 blocks instead of 28; this should literally be the last resort. Until then we should let the approval-voting escalation mechanism do its thing and cover the no-shows.
2. On L1 aggression, don't send messages for blocks too far from the first unfinalized one; there is no point in sending the messages for block 20 if block 1 is still unfinalized.
3. On L1 aggression, send messages and then back off for 3 * resend_unfinalized_period to give everyone time to clear up their queues.
4. If aggression is enabled, accept duplicate messages from validators and don't punish them by reducing their reputation, which may create no-shows.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
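Improvement (3) as a hedged sketch (illustrative constants and names, not the subsystem's code): after a resend, back off for 3 * resend_unfinalized_period blocks.
```rust
// Illustrative back-off rule: after resending at block `last_resend`, don't
// resend again until 3 * RESEND_PERIOD blocks later.
const RESEND_PERIOD: u32 = 8; // matches resend_unfinalized_period above

fn should_resend(current_block: u32, last_resend: Option<u32>) -> bool {
    match last_resend {
        None => true,
        Some(at) => current_block >= at + 3 * RESEND_PERIOD,
    }
}

fn main() {
    assert!(should_resend(100, None));
    assert!(!should_resend(110, Some(100))); // within the back-off window
    assert!(should_resend(124, Some(100))); // 100 + 24 = 124: window elapsed
    println!("back-off ok");
}
```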
-
Ludovic_Domingues authored
# Description
Migrated polkadot-runtime-common auctions benchmarking to the new benchmarking syntax v2. This is part of #6202.
---------
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
-
PG Herveou authored
Add tests for the #6608 fix. Fixes https://github.com/paritytech/contract-issues/issues/12
---------
Co-authored-by: command-bot <>
-
- Dec 10, 2024
-
Iulian Barbu authored
# Description
This PR changes a few things:
* The `--dev` flag will not conflict with `--chain` anymore, but if `--chain` is not given it will set `--chain=dev`.
* `--dev-block-time` is optional and defaults to 3000ms if not set after setting `--dev`.
* To start OmniNode with manual seal it is enough to pass just `--dev`.
* `--dev-block-time` can still be used to start a node with manual seal, but it will not set it up as `--dev` does (it will not set a bunch of flags which are enabled by default when `--dev` is set, e.g. `--tmp`, `--alice` and `--force-authoring`).

Closes: #6537

## Integration
Relevant for node/runtime developers that use the OmniNode lib, including the `polkadot-omni-node` binary, although the recommended way for runtime development is to use `chopsticks`.

## Review Notes
* Decided to focus only on OmniNode & templates docs in relation to it, and leave the `parachain-template-node` as is (meaning `--dev` isn't usable and te...
-
Ron authored
## Description
Our smoke tests transfer `WETH` from Sepolia to Westend-AssetHub. This breaks: they try to reregister `WETH` on AH but fail as follows:
https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe
https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2
The reason is that the transact call encoded on BH to register the asset https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289
```
0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000
```
contains an `asset_id` (an XCM location) that can't be decoded on AH in V5. The initial post about the issue is in https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation
---------
Co-au...
-
Alexandru Gheorghe authored
The way we build the messages we need to send to approval-distribution can result in a situation where, if we have multiple assignments covered by a coalesced approval, the messages are sent in this order: ASSIGNMENT1, APPROVAL, ASSIGNMENT2. This happens because we iterate over each candidate and add both the assignment and the approval for that candidate to the queue of messages, and when the approval reaches the approval-distribution subsystem it won't be imported and gossiped because one of the assignments for it is not known yet. So in a network where a lot of nodes are restarting at the same time we could end up in a situation where one set of nodes correctly received the assignments and approvals before the restart, approve their blocks, and don't trigger their assignments. The other set of nodes would receive the assignments and approvals after the restart, but because the approvals never get broadcast anymore due to this bug, the only way they can approve is if other nodes start broadcasting their assignments. I think this bug contributed to the reason the network did not recover on `2024-11-25 15:55:40` after the restarts. Tested this scenario with a `zombienet` where nodes are finalising blocks because of aggression and all nodes are restarted at once, and confirmed the network lags and doesn't recover before the fix, and it does after.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Kazunobu Ndong authored
# Description
**Understood assignment:** The initial assignment description is in #6194. In order to simplify the display of commands and ensure they are tested for the chain spec builder's `polkadot-sdk` reference docs, find every occurrence of `#[docify::export]` where `process::Command` is used, and replace the use of `process::Command` with `run_cmd!` from the `cmd_lib` crate.
---------
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
-
Maksym H authored
Fixes: https://github.com/paritytech/ci_cd/issues/1079

Improvements:
- switch to the GitHub-native token creation action
- refresh the branch in CI from HEAD, to prevent failures
- add an APP token when pushing, to allow CI to be retriggered by the bot
-
Joseph Zhao authored
Close: #5858
---------
Co-authored-by: Bastian Köcher <git@kchr.de>
-
Branislav Kontur authored
Closes: https://github.com/paritytech/polkadot-sdk/issues/5551

## Description
With [permissionless lanes PR#4949](https://github.com/paritytech/polkadot-sdk/pull/4949), the congestion mechanism based on sending `Transact(report_bridge_status(is_congested))` from `pallet-xcm-bridge-hub` to `pallet-xcm-bridge-hub-router` was replaced with a congestion mechanism that relied on monitoring XCMP queues. However, this approach could cause issues, such as suspending the entire XCMP queue instead of isolating the affected bridge. This PR reverts back to using `report_bridge_status` as before.

## TODO
- [x] benchmarks
- [x] prdoc

## Follow-up
https://github.com/paritytech/polkadot-sdk/pull/6231
---------
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: command-bot <>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
-
Branislav Kontur authored
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
-
Francisco Aguirre authored
We removed the `require_weight_at_most` field and later changed it to `fallback_max_weight`. This was to have a fallback when sending a message to v4 chains, which happens in the small time window when chains are upgrading. We originally put no fallback for a message in Snowbridge's inbound queue, but we should have one. This PR adds it.
---------
Co-authored-by: GitHub Action <action@github.com>
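The shape of the fix, sketched with stand-in types (the field names follow the XCMv5 `Transact` described above; the weight numbers are placeholders, not the values chosen in the PR):
```rust
// Minimal stand-ins so the sketch compiles; the real types come from
// staging-xcm. The point: always provide a fallback weight so the message
// can be downgraded to XCM v4, whose Transact requires an explicit weight.
#[derive(Debug)]
struct Weight { ref_time: u64, proof_size: u64 }

#[derive(Debug)]
enum Instruction {
    Transact {
        fallback_max_weight: Option<Weight>, // None broke the v5 -> v4 path
        call: Vec<u8>,
    },
}

fn main() {
    // Placeholder numbers; the PR picks a conservative fallback for the
    // snowbridge inbound-queue message.
    let xcm = Instruction::Transact {
        fallback_max_weight: Some(Weight { ref_time: 400_000_000, proof_size: 8_000 }),
        call: vec![],
    };
    println!("{xcm:?}");
}
```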
-
- Dec 09, 2024
-
Adrian Catangiu authored
# Description
Sending XCM messages to other chains requires paying a "transport fee". This can be paid either:
- from the `origin` local account if `jit_withdraw = true`,
- taken from the Holding register otherwise.

This currently works for the following hops/scenarios:
1. On the destination, no transport fee is needed (only sending costs, not receiving),
2. Local/originating chain: just set JIT=true and the fee will be paid from the signed account,
3. Intermediary hops - only if the intermediary is acting as reserve between two untrusted chains (aka only for the `DepositReserveAsset` instruction) - this was fixed in https://github.com/paritytech/polkadot-sdk/pull/3142

But now we're seeing more complex asset transfers that mix reserve transfers with teleports depending on the involved chains.

# Example
E.g. transferring DOT between the Relay and a parachain, but through AH (using AH instead of the Relay chain as the parachain's DOT reserve). In the `Parachain --1--> AssetHub --2--> Relay` scenario, DOT has to be reserve-withdrawn in leg `1`, then teleported in leg `2`. On the intermediary hop (AssetHub), `InitiateTeleport` fails to send the onward message because of missing transport fees. We also can't rely on `jit_withdraw` because the original origin is lost on the way, and even if it weren't, we can't rely on the user having funded accounts on each hop along the way.

# Solution/Changes
- Charge the transport fee in the executor from the transferred assets (if available),
- Only charge from transferred assets if JIT_WITHDRAW was not set,
- Only charge from transferred assets unless using XCMv5 `PayFees`, where we do not have this problem.

# Testing
Added regression tests in emulated transfers. Fixes https://github.com/paritytech/polkadot-sdk/issues/4832 Fixes https://github.com/paritytech/polkadot-sdk/issues/6637
---------
Signed-off-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
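The fee-charging rule from the Solution section, as a hedged sketch (stand-in types and logic, not the xcm-executor code):
```rust
#[derive(Debug)]
struct Assets(u128); // stand-in for the transferred assets in Holding

// Pay the onward transport fee from the transferred assets, unless JIT
// withdrawal or XCMv5 PayFees already covers it.
fn charge_transport_fee(
    assets: &mut Assets,
    fee: u128,
    jit_withdraw: bool,
    used_pay_fees: bool,
) -> Result<(), &'static str> {
    // JIT withdrawal pays from the origin's account; XCMv5 PayFees has its
    // own fee asset -- in both cases the transferred assets stay untouched.
    if jit_withdraw || used_pay_fees {
        return Ok(());
    }
    if assets.0 < fee {
        return Err("not enough transferred assets to pay transport fee");
    }
    assets.0 -= fee;
    Ok(())
}

fn main() {
    let mut assets = Assets(1_000);
    charge_transport_fee(&mut assets, 100, false, false).unwrap();
    assert_eq!(assets.0, 900);
    println!("fee charged from transferred assets: {assets:?}");
}
```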
-
Alexander Theißen authored
The dependency on `pallet_balances` doesn't seem to be necessary; at least everything compiles for me without it. Removed this dependency and a few others that seem to be leftovers.
---------
Co-authored-by: GitHub Action <action@github.com>
-
Alexandru Gheorghe authored
After finality started lagging on Kusama around 2024-11-25 15:55:40, validators occasionally started seeing this log when importing votes covering more than one assignment:
```
Possible bug: Vote import failed
```
That happens because the assumption that assignments from the same validator would have the same required routing doesn't hold after you enable aggression: you might receive the first assignment, then modify the routing for it in `enable_aggression`, then receive the second assignment and the vote covering both assignments, so the routing for the first and second assignments wouldn't match and we would fail to import the vote. From the logs I've seen, I don't think this is the reason the network didn't fully recover until the failsafe kicked in, because the votes had already been imported in approval-voting before this error.
---------
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Maksym H authored
- change bench to default to the old CLI
- fix profile to production
---------
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: command-bot <>
-
Egor_P authored
Set back the token for the cmd_bot in the backport flow so that it works again, until the new setup is figured out with the security team.
-