- Apr 01, 2024
-
-
Alexandru Gheorghe authored
Runtime release 1.2 includes bumping the ParachainHost APIs up to v10, so let's move all the released APIs out of the vstaging folder. This PR does not include any logic changes, only renaming of the modules and some moving around. Signed-off-by: Alexandru Gheorghe <[email protected]>
-
Alexandru Gheorghe authored
The test started failing after https://github.com/paritytech/polkadot-sdk/commit/66051adb which enabled approval coalescing. That was expected to happen, because the test required a polkadot_parachain_approval_checking_finality_lag of 0, which can't happen with max_approval_coalesce_count greater than 1: we always delay the approval for no_show_duration_ticks/2 in case we can coalesce it with other approvals. So relax the restrictions a bit, since we don't actually care that the lags are 0, only that finality is progressing and not stuck. Signed-off-by: Alexandru Gheorghe <[email protected]>
-
Andrei Sandu authored
fixes https://github.com/paritytech/polkadot-sdk/issues/3886 --------- Signed-off-by: Andrei Sandu <[email protected]>
-
Alexandru Gheorghe authored
The metric records the current protocol_version of the validator that just connected together with peer_map.len(), which contains all connected peers. This makes the metric wrong: it won't tell us how many peers we have connected per version, because it always records the total number of peers. Fix this by counting by version inside peer_map; additionally, because that might be a bit heavier than len(), publish it only on active leaves. --------- Signed-off-by: Alexandru Gheorghe <[email protected]>
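A simplified sketch of the idea behind the fix (illustrative stand-in types, not the actual subsystem code): derive the per-version counts from the peer map instead of pairing the total peer count with the version of the last peer that connected.

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the real network types.
#[derive(Hash, PartialEq, Eq, Clone, Copy)]
struct ProtocolVersion(u32);
struct PeerData { version: ProtocolVersion }
type PeerId = u64;

/// Sketch: count connected peers per protocol version from the peer map.
fn connected_by_version(peer_map: &HashMap<PeerId, PeerData>) -> HashMap<ProtocolVersion, usize> {
    let mut counts = HashMap::new();
    for peer in peer_map.values() {
        *counts.entry(peer.version).or_insert(0) += 1;
    }
    counts
}
```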
-
- Mar 31, 2024
-
-
Bastian Köcher authored
Closes: https://github.com/paritytech/polkadot-sdk/issues/3906
-
- Mar 29, 2024
-
-
s0me0ne-unkn0wn authored
Removes transient code introduced to clean up the offchain database after the `im-online` pallet removal. Should be merged after #2290 has been enacted.
-
Andrei Sandu authored
Somehow https://github.com/paritytech/polkadot-sdk/pull/3795 was merged, but tests are now failing on master. I suspect that CI is not even running these tests anymore, which is a big issue. --------- Signed-off-by: Andrei Sandu <[email protected]>
-
- Mar 28, 2024
-
-
tugy authored
# Description

Since the binary split, additional syscalls related to the workers are getting blocked. With the hardened systemd file the node shows the following warning:

```
Cannot fully enable landlock, a Linux kernel security feature. Running validation of malicious PVF code has a higher risk of compromising this machine. Consider upgrading the kernel version for maximum security. status=Ok(NotEnforced) abi=1
```

For it to work we additionally need to allow:

- mount
- umount2
- pivot_root

and set `RestrictNamespaces=false`. A new `SystemCallFilter=pivot_root` line was added because otherwise the syscall would get blocked by `~@privileged`.

Co-authored-by: s0me0ne-unkn0wn <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
-
Alin Dima authored
-
- Mar 27, 2024
-
-
Gonçalo Pestana authored
This PR adds a new extrinsic `Call::restore_ledger`, gated by the `StakingAdmin` origin, that restores a corrupted staking ledger. This extrinsic will be used to recover ledgers that were affected by the issue discussed in https://github.com/paritytech/polkadot-sdk/issues/3245. The extrinsic re-writes the storage items associated with a stash account provided as input parameter. The data used to reset the ledger can be either i) fetched on-chain or ii) partially/totally set by the input parameters of the call.

In order to use on-chain data to restore the staking locks, we need a way to read the current lock in the balances pallet. This PR adds an `InspectLockableCurrency` trait and implements it in pallet-balances. An alternative would be to tightly couple staking with pallet-balances, but that's inelegant (an example of how it would look in [this branch](https://github.com/paritytech/polkadot-sdk/tree/gpestana/ledger-badstate-clean_tightly)). More details on the types of corruption and the corresponding fixes: https://hackmd.io/DLb5jEYWSmmvqXC9ae4yRg?view#/. We verified that `Call::restore_ledger` does fix all currently corrupted ledgers in Polkadot and Kusama. You can verify it here: https://hackmd.io/v-XNrEoGRpe7APR-EZGhOA.

**Changes introduced**

- Adds the `Call::restore_ledger` extrinsic to recover a corrupted ledger;
- Adds the trait `frame_support::traits::currency::InspectLockableCurrency` to allow external pallets to read current locks given an account and lock ID;
- Implements `InspectLockableCurrency` in pallet-balances;
- Adds staking locks try-runtime checks (https://github.com/paritytech/polkadot-sdk/issues/3751).

**Todo**

- [x] benchmark `Call::restore_ledger`
- [x] thorough testing of all ledger recovery cases
- [x] consider adding the staking locks try-runtime checks to this PR (https://github.com/paritytech/polkadot-sdk/issues/3751)
- [x] simulate restoring all ledgers (https://hackmd.io/Dsa2tvhISNSs7zcqriTaxQ?view) in Polkadot and Kusama using chopsticks -- https://hackmd.io/v-XNrEoGRpe7APR-EZGhOA

Related to https://github.com/paritytech/polkadot-sdk/issues/3245
Closes https://github.com/paritytech/polkadot-sdk/issues/3751

--------- Co-authored-by: command-bot <>
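As a rough illustration of the inspection trait described above (shape inferred from this description and hedged; see frame_support for the actual definition), it lets an external pallet read the amount locked under a given lock identifier:

```rust
use frame_support::traits::{LockIdentifier, LockableCurrency};

/// Sketch: read-only access to the amount locked under a specific lock id.
pub trait InspectLockableCurrency<AccountId>: LockableCurrency<AccountId> {
    /// Amount of funds locked for `who` under the lock `id`.
    fn balance_locked(id: LockIdentifier, who: &AccountId) -> Self::Balance;
}
```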
-
Ermal Kaleci authored
This will make it possible to use remaining weight on idle for processing enqueued messages. More context here https://github.com/paritytech/polkadot-sdk/issues/3709 --------- Co-authored-by: Adrian Catangiu <[email protected]>
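A minimal sketch of the pattern (assuming a `ServiceQueues` implementation such as pallet-message-queue's; this is not the actual pallet code): the remaining block weight reported to `on_idle` is simply handed over to the queue-servicing logic.

```rust
use frame_support::traits::ServiceQueues;
use frame_support::weights::Weight;

/// Sketch of an `on_idle`-style hook body: spend leftover block weight on messages.
fn service_on_idle<Q: ServiceQueues>(remaining_weight: Weight) -> Weight {
    // Returns the weight actually consumed by message processing.
    Q::service_queues(remaining_weight)
}
```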
-
Andrei Sandu authored
This works only for collators that implement the `collator_fn`, allowing the `collation-generation` subsystem to pull collations triggered on new heads. Also enables `request_v2::CollationFetchingResponse::CollationWithParentHeadData` for the test adder/undying collators. TODO: - [x] fix tests - [x] new tests - [x] PR doc --------- Signed-off-by: Andrei Sandu <[email protected]>
-
Francisco Aguirre authored
`execute` and `send` try to decode the xcm in the parameters before reaching the filter line. The new extrinsics decode only after the filter line. These should be used instead of the old ones.

## TODO

- [x] Tests
- [x] Generate weights
- [x] Deprecation issue -> https://github.com/paritytech/polkadot-sdk/issues/3771
- [x] PRDoc
- [x] Handle error in pallet-contracts

This would make writing XCMs in PJS Apps more difficult, but here's the fix for that: https://github.com/polkadot-js/apps/pull/10350. Already deployed! https://polkadot.js.org/apps/#/utilities/xcm

Supersedes https://github.com/paritytech/polkadot-sdk/pull/1798/ --------- Co-authored-by: PG Herveou <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Adrian Catangiu <[email protected]>
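A hypothetical sketch of the ordering difference (illustrative function and names, not the actual pallet-xcm extrinsics): the message is taken as opaque SCALE-encoded bytes and only decoded once the origin filter has passed, so filtered callers never exercise the decoding path.

```rust
use codec::Decode;
use xcm::VersionedXcm;

/// Sketch: filter first, decode afterwards.
fn execute_encoded_xcm(origin_allowed: bool, encoded: Vec<u8>) -> Result<(), &'static str> {
    // The origin filter runs before we ever look at the message bytes.
    if !origin_allowed {
        return Err("Filtered");
    }
    // Only now is the XCM decoded.
    let _message: VersionedXcm<()> =
        VersionedXcm::decode(&mut &encoded[..]).map_err(|_| "UnableToDecode")?;
    // Execution of `_message` would follow here.
    Ok(())
}
```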
-
Andrei Sandu authored
This is a long overdue chore ... --------- Signed-off-by: Andrei Sandu <[email protected]> Co-authored-by: ordian <[email protected]>
-
- Mar 26, 2024
-
-
ordian authored
Fixes #3826. The docs on the `candidates` field of `BlockEntry` incorrectly stated that they are sorted by core index. The (incorrect) optimization was introduced in #3747 based on this assumption. The actual ordering is based on the ordering of `CandidateIncluded` events in the runtime. We revert this optimization here. - [x] verify the underlying issue - [x] add a regression test --------- Co-authored-by: Bastian Köcher <[email protected]>
-
Pavel Orlov authored
The PR provides an API for obtaining:

- the weight required to execute an XCM message,
- a list of acceptable `AssetId`s for message execution payment,
- the cost of the weight in the specified acceptable `AssetId`.

It is meant to address an issue where one has to guess how much fee to pay for execution. Also, at the moment, a client has to guess which assets are acceptable for fee execution payment. See the related issue https://github.com/paritytech/polkadot-sdk/issues/690. With this API, a client is supposed to query the list of the supported asset IDs (in the XCM version format the client understands), weigh the XCM program the client wants to execute and convert the weight into one of the acceptable assets. Note that the client is supposed to know what program will be executed on what chains. However, having a small companion JS library for pallet-xcm and xtokens should be enough to determine what XCM programs will be executed and where (since these pallets compose a known small set of programs).

```Rust
pub trait XcmPaymentApi<Call>
where
	Call: Codec,
{
	/// Returns a list of acceptable payment assets.
	///
	/// # Arguments
	///
	/// * `xcm_version`: Version.
	fn query_acceptable_payment_assets(xcm_version: Version) -> Result<Vec<VersionedAssetId>, Error>;

	/// Returns a weight needed to execute a XCM.
	///
	/// # Arguments
	///
	/// * `message`: `VersionedXcm`.
	fn query_xcm_weight(message: VersionedXcm<Call>) -> Result<Weight, Error>;

	/// Converts a weight into a fee for the specified `AssetId`.
	///
	/// # Arguments
	///
	/// * `weight`: convertible `Weight`.
	/// * `asset`: `VersionedAssetId`.
	fn query_weight_to_asset_fee(weight: Weight, asset: VersionedAssetId) -> Result<u128, Error>;

	/// Get delivery fees for sending a specific `message` to a `destination`.
	/// These always come in a specific asset, defined by the chain.
	///
	/// # Arguments
	/// * `message`: The message that'll be sent, necessary because most delivery fees are based on the
	///   size of the message.
	/// * `destination`: The destination to send the message to. Different destinations may use
	///   different senders that charge different fees.
	fn query_delivery_fees(destination: VersionedLocation, message: VersionedXcm<()>) -> Result<VersionedAssets, Error>;
}
```

An [example](https://gist.github.com/PraetorP/4bc323ff85401abe253897ba990ec29d) of client-side code. --------- Co-authored-by: Francisco Aguirre <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: Daniel Shiposha <[email protected]>
-
Tsvetomir Dimitrov authored
This PR notifies the broker pallet about any parachain slot swaps performed on the relay chain. This is achieved by registering an `OnSwap` hook for the `coretime` pallet. The hook sends an XCM message to the broker chain and invokes a new extrinsic `swap_leases`, which updates the `Leases` storage item (which keeps the legacy parachain leases). I made two assumptions in this PR: 1. [`Leases`](https://github.com/paritytech/polkadot-sdk/blob/4987d798/substrate/frame/broker/src/lib.rs#L120) in the `broker` pallet and [`Leases`](https://github.com/paritytech/polkadot-sdk/blob/4987d798/polkadot/runtime/common/src/slots/mod.rs#L118) in the `slots` pallet are in sync. 2. The `swap_leases` extrinsic from the `broker` pallet can be triggered only by root or by an XCM message from the relay chain. If not, the extrinsic will generate an error and do nothing. As a side effect of the changes, the `OnSwap` trait is moved from runtime/common/traits.rs to runtime/parachains; otherwise it is not accessible from the `broker` pallet. Closes https://github.com/paritytech/polkadot-sdk/issues/3552 TODOs: - [x] Weights - [x] Tests --------- Co-authored-by: command-bot <> Co-authored-by: eskimor <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
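As a rough sketch of the hook's shape (hypothetical implementation with stand-in types, assuming an `OnSwap` trait along the lines of the one in the parachains runtime; the real hook builds and sends an XCM to the broker chain):

```rust
/// Para identifier, stand-in for the real primitive type.
type ParaId = u32;

/// Called when two para slots are logically swapped on the relay chain.
pub trait OnSwap {
    fn on_swap(one: ParaId, other: ParaId);
}

/// Sketch: an implementation that forwards the swap to the broker chain.
pub struct NotifyBroker;
impl OnSwap for NotifyBroker {
    fn on_swap(one: ParaId, other: ParaId) {
        // In the real code this would construct an XCM message invoking
        // `swap_leases(one, other)` on the broker chain and send it.
        let _ = (one, other);
    }
}
```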
-
Andrei Eres authored
Here we add the ability to save subsystem benchmark results in JSON format to display them as graphs. To draw graphs, the CI team will use [github-action-benchmark](https://github.com/benchmark-action/github-action-benchmark). Since we are using custom benchmarks, we need to prepare [a specific data type](https://github.com/benchmark-action/github-action-benchmark?tab=readme-ov-file#examples):

```
[
    {
        "name": "CPU Load",
        "unit": "Percent",
        "value": 50
    }
]
```

Then we'll get graphs like this: ![example](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/main.png) [A live page with graphs](https://benchmark-action.github.io/github-action-benchmark/dev/bench/) --------- Co-authored-by: ordian <[email protected]>
-
Dcompoze authored
**Update:** Pushed additional changes based on the review comments.

**This pull request fixes various spelling mistakes in this repository.**

Most of the changes are contained in the first **3** commits:

- `Fix spelling mistakes in comments and docs`
- `Fix spelling mistakes in test names`
- `Fix spelling mistakes in error messages, panic messages, logs and tracing`

Other source code spelling mistakes are separated into individual commits for easier reviewing:

- `Fix the spelling of 'authority'`
- `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'`
- `Fix the spelling of 'prev_enqueud_messages'`
- `Fix the spelling of 'endpoint'`
- `Fix the spelling of 'children'`
- `Fix the spelling of 'PenpalSiblingSovereignAccount'`
- `Fix the spelling of 'PenpalSudoAccount'`
- `Fix the spelling of 'insufficient'`
- `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'`
- `Fix the spelling of 'subtracted'`
- `Fix the spelling of 'CandidatePendingAvailability'`
- `Fix the spelling of 'exclusive'`
- `Fix the spelling of 'until'`
- `Fix the spelling of 'discriminator'`
- `Fix the spelling of 'nonexistent'`
- `Fix the spelling of 'subsystem'`
- `Fix the spelling of 'indices'`
- `Fix the spelling of 'committed'`
- `Fix the spelling of 'topology'`
- `Fix the spelling of 'response'`
- `Fix the spelling of 'beneficiary'`
- `Fix the spelling of 'formatted'`
- `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'`
- `Fix the spelling of 'succeeded'`
- `Fix the spelling of 'reopened'`
- `Fix the spelling of 'proposer'`
- `Fix the spelling of 'InstantiationNonce'`
- `Fix the spelling of 'depositor'`
- `Fix the spelling of 'expiration'`
- `Fix the spelling of 'phantom'`
- `Fix the spelling of 'AggregatedKeyValue'`
- `Fix the spelling of 'randomness'`
- `Fix the spelling of 'defendant'`
- `Fix the spelling of 'AquaticMammal'`
- `Fix the spelling of 'transactions'`
- `Fix the spelling of 'PassingTracingSubscriber'`
- `Fix the spelling of 'TxSignaturePayload'`
- `Fix the spelling of 'versioning'`
- `Fix the spelling of 'descendant'`
- `Fix the spelling of 'overridden'`
- `Fix the spelling of 'network'`

Let me know if this structure is adequate.

**Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`, `Merkelization`, `Merkleization`, is somewhat inconsistent but I left it as it is.

~~**Note:** In some places the term `Receival` is used to refer to message reception, IMO `Reception` is the correct word here, but I left it as it is.~~

~~**Note:** In some places the term `Overlayed` is used instead of the more acceptable version `Overlaid` but I also left it as it is.~~

~~**Note:** In some places the term `Applyable` is used instead of the correct version `Applicable` but I also left it as it is.~~

**Note:** Some usage of British vs American english e.g. `judgement` vs `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize` etc. are both present in different places, but I suppose that's understandable given the number of contributors.

~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it triggers errors in CI when I make changes to it, so I left it as it is.~~
-
Dcompoze authored
Fixes formatting for https://github.com/paritytech/polkadot-sdk/pull/3698
-
- Mar 25, 2024
-
-
Andrei Eres authored
Adds availability-write regression tests. The results for the `availability-distribution` subsystem are volatile, so I had to reduce the precision of the test.
-
Alin Dima authored
https://github.com/paritytech/polkadot-sdk/issues/3742
-
Serban Iorga authored
Related to https://github.com/paritytech/parity-bridges-common/issues/2538 This PR doesn't contain any functional changes. The PR moves specific bridged chain definitions from the `bridges/primitives` to the `bridges/chains` folder in order to facilitate the migration of the `parity-bridges-repo` into `polkadot-sdk`, as discussed in https://hackmd.io/LprWjZ0bQXKpFeveYHIRXw?view Apart from this it also includes some cosmetic changes to some `Cargo.toml` files as a result of running `diener workspacify`.
-
- Mar 23, 2024
-
-
eskimor authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3762. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]>
-
- Mar 22, 2024
-
-
girazoki authored
Currently `transfer_assets` from pallet-xcm covers 4 different transfer types:

- `LocalReserve`
- `DestinationReserve`
- `Teleport`
- `RemoteReserve`

For the first three, the local execution and the remote message sending are separated, and fees are deducted in pallet-xcm itself: https://github.com/paritytech/polkadot-sdk/blob/3410dfb3/polkadot/xcm/pallet-xcm/src/lib.rs#L1758. For the 4th case, `RemoteReserve`, pallet-xcm still relies on the xcm-executor itself to send the message (through the `InitiateReserveWithdraw` instruction). In this case, if delivery fees need to be charged, it is not possible to do so because the `jit_withdraw` mode has not been set. This PR proposes to still use `InitiateReserveWithdraw`, but to prepend a `SetFeesMode { jit_withdraw: true }` instruction to make sure delivery fees can be paid. A test case covering the aforementioned scenario is also added. --------- Co-authored-by: Adrian Catangiu <[email protected]>
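A simplified sketch of the proposed message shape (assuming XCM v4-style types from `xcm::latest::prelude`; this is not the actual pallet-xcm code and omits the preceding withdrawal of the assets): prepend `SetFeesMode { jit_withdraw: true }` so the executor can withdraw delivery fees just-in-time when it sends the remote-reserve message.

```rust
use xcm::latest::prelude::*;

/// Sketch: the remote-reserve leg with JIT fee withdrawal enabled up front.
fn remote_reserve_program(assets: Assets, reserve: Location, xcm_on_reserve: Xcm<()>) -> Xcm<()> {
    Xcm(vec![
        // Allow delivery fees to be withdrawn from the origin when sending.
        SetFeesMode { jit_withdraw: true },
        // The instruction pallet-xcm already relied on for the `RemoteReserve` case.
        InitiateReserveWithdraw {
            assets: AssetFilter::Definite(assets),
            reserve,
            xcm: xcm_on_reserve,
        },
    ])
}
```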
-
PG Herveou authored
We do not need to make these traits generic over the `QueryId` type; we can just use the `QueryId` alias everywhere.
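An illustrative before/after sketch of the simplification (hypothetical trait names; only the `u64` alias shape reflects what the xcm crate is assumed to define):

```rust
/// The concrete alias (in the real code this lives in the xcm crate).
pub type QueryId = u64;

/// Before: every implementer had to be generic over the id type.
pub trait OnResponseGeneric<Id> {
    fn on_response(id: Id);
}

/// After: just use the alias directly.
pub trait OnResponse {
    fn on_response(id: QueryId);
}
```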
-
Dmitry Markin authored
Make sure public addresses explicitly set by the operator go first in the authority discovery DHT records. Also update the `Discovery` behavior to eliminate duplicates in the returned addresses. This PR should improve the situation with https://github.com/paritytech/polkadot-sdk/issues/3519. Obsoletes https://github.com/paritytech/polkadot-sdk/pull/3657.
-
Will | Paradox | ParaNodes.io authored
Good day, I'm seeking to add the following bootnodes for Kusama and Polkadot's relay and system chains. The following commands can be used to test connectivity. All node keys are backed up.

Polkadot:
```
polkadot --chain polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot.luckyfriday.io/tcp/443/wss/p2p/12D3KooWAdyiVAaeGdtBt6vn5zVetwA4z4qfm9Fi2QCSykN1wTBJ" --no-hardware-benchmarks
```

Assethub-Polkadot:
```
polkadot-parachain --chain asset-hub-polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot-assethub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWDR9M7CjV1xdjCRbRwkFn1E7sjMaL4oYxGyDWxuLrFc2J" --no-hardware-benchmarks
```

Bridgehub-Polkadot:
```
polkadot-parachain --chain bridge-hub-polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot-bridgehub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWKf3mBXHjLbwtPqv1BdbQuwbFNcQQYxASS7iQ25264AXH" --no-hardware-benchmarks
```

Collectives-Polkadot:
```
polkadot-parachain --chain collectives-polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot-collectives.luckyfriday.io/tcp/443/wss/p2p/12D3KooWCzifnPooTt4kvTnXT7FTKTymVL7xn7DURQLsS2AKpf6w" --no-hardware-benchmarks
```

Kusama:
```
polkadot --chain kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-kusama.luckyfriday.io/tcp/443/wss/p2p/12D3KooWS1Lu6DmK8YHSvkErpxpcXmk14vG6y4KVEFEkd9g62PP8" --no-hardware-benchmarks
```

Assethub-Kusama:
```
polkadot-parachain --chain asset-hub-kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-kusama-assethub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWSwaeFs6FNgpgh54fdoxSDAA4nJNaPE3PAcse2GRrG7b3" --no-hardware-benchmarks
```

Bridgehub-Kusama:
```
polkadot-parachain --chain bridge-hub-kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-kusama-bridgehub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWQybw6AFmAvrFfwUQnNxUpS12RovapD6oorh2mAJr4xyd" --no-hardware-benchmarks
```

Co-authored-by: Bastian Köcher <[email protected]>
-
- Mar 21, 2024
-
-
Alejandro Martinez Andres authored
Based on issue [#2512](https://github.com/paritytech/polkadot-sdk/issues/2512), it seems that some ecosystem teams are using these networks to set up their staging environments and test certain use cases, some of them involving sending XCMs from the relay with origins not allowed in the current configuration. This change reverts the configuration of `SendXcmOrigin`. --------- Co-authored-by: Adrian Catangiu <[email protected]>
-
ordian authored
Small refactoring to reduce the algorithmic complexity of the initial message distribution in approval voting after a sync from O(n_candidates ^ 2) to O(n_candidates).
-
Alin Dima authored
Changes needed to implement the runtime part of elastic scaling: https://github.com/paritytech/polkadot-sdk/issues/3131, https://github.com/paritytech/polkadot-sdk/issues/3132, https://github.com/paritytech/polkadot-sdk/issues/3202. Also fixes https://github.com/paritytech/polkadot-sdk/issues/3675.

TODOs:

- [x] storage migration
- [x] optimise process_candidates from O(N^2)
- [x] drop backable candidates which form cycles
- [x] fix unit tests
- [x] add more unit tests
- [x] check the runtime APIs which use the pending availability storage. We need to expose all of them, see https://github.com/paritytech/polkadot-sdk/issues/3576
- [x] optimise the candidate selection. we're currently picking randomly until we satisfy the weight limit. we need to be smart about not breaking candidate chains while being fair to all paras - https://github.com/paritytech/polkadot-sdk/pull/3573

Relies on the changes made in https://github.com/paritytech/polkadot-sdk/pull/3233 in terms of the inclusion policy and the candidate ordering. --------- Signed-off-by: alindima <[email protected]> Co-authored-by: command-bot <> Co-authored-by: eskimor <[email protected]>
-
Egor_P authored
This PR backports from the `1.9.0` release branch: - the node version bump - the `spec_version` bump - the reordering of the `prdocs` to the appropriate folder
-
gupnik authored
Step in https://github.com/paritytech/polkadot-sdk/issues/3688
-
- Mar 20, 2024
-
-
eskimor authored
We witnessed really poor performance on Rococo, where we ended up with 50 on-demand cores. This was due to the fact that the full queue was processed for each core. With this change, full queue processing will happen far less often (most of the time the complexity is O(1) or O(log(n))) and, if it happens, then only for one core (in expectation). Also the spot price is now updated before each order to ensure economic back pressure.

TODO:

- [x] Implement
- [x] Basic tests
- [x] Add more tests (see todos)
- [x] Run benchmark to confirm better performance, first results suggest > 100x faster.
- [x] Write migrations
- [x] Bump scale-info version and remove patch in Cargo.toml
- [x] Write PR docs: on-demand performance improved, more on-demand cores are no longer problematic. If need be, the max queue size can also be increased again. (Maybe not to 10k.)

Optional: Performance can be improved even more if we called `pop_assignment_for_core()` before calling `report_processed` (avoid needless affinity drops). The effect gets smaller the larger the claim queue, and I would only go for it if it does not add complexity to the scheduler. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: antonva <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]> Co-authored-by: ordian <[email protected]>
-
Tsvetomir Dimitrov authored
The PR adds two things: 1. A runtime API exposing the whole claim queue. 2. Consumption of that API in `collation-generation` to fetch the next scheduled `ParaEntry` for an occupied core. Related to https://github.com/paritytech/polkadot-sdk/issues/1797
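As a rough illustration of the shape of such an API (the concrete signature lives in the ParachainHost runtime API; the types below are illustrative stand-ins), it maps each core to the queue of paras scheduled on it:

```rust
use std::collections::{BTreeMap, VecDeque};

// Illustrative stand-ins for the real primitives.
type CoreIndex = u32;
type ParaId = u32;

/// Sketch of the API surface: for every core, the paras queued on it, in order.
trait ClaimQueueApi {
    fn claim_queue() -> BTreeMap<CoreIndex, VecDeque<ParaId>>;
}
```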
-
- Mar 19, 2024
-
-
Davide Galassi authored
Introduces the `CryptoBytes` type, defined as:

```rust
pub struct CryptoBytes<const N: usize, Tag = ()>(pub [u8; N], PhantomData<fn() -> Tag>);
```

The type implements a bunch of methods and traits which are typically expected from a byte-array newtype (NOTE: some of the methods and trait implementations IMO are a bit redundant, but I decided to maintain them all to not change too much stuff in this PR).

It also introduces two (generic) typical consumers of `CryptoBytes`: `PublicBytes` and `SignatureBytes`.

```rust
pub struct PublicTag;
pub type PublicBytes<const N: usize, CryptoTag> = CryptoBytes<N, (PublicTag, CryptoTag)>;

pub struct SignatureTag;
pub type SignatureBytes<const N: usize, CryptoTag> = CryptoBytes<N, (SignatureTag, CryptoTag)>;
```

Both of them use a tag to differentiate the two types at a higher level. Downstream specializations will further specialize using a dedicated crypto tag. For example in ECDSA:

```rust
pub struct EcdsaTag;

pub type Public = PublicBytes<PUBLIC_KEY_SERIALIZED_SIZE, EcdsaTag>;
pub type Signature = SignatureBytes<SIGNATURE_SERIALIZED_SIZE, EcdsaTag>;
```

Overall we have cleaner and, most importantly, **consistent** code for all the types involved. All these details are opaque to the end user, who can use `Public` and `Signature` for the cryptos as before.
-
ordian authored
On top of #3302. We want the validators to upgrade first before we add changes to the collation side to send the new variants, which is why this part is extracted into a separate PR. The detection of when to send the parent head is based on the core assignments at the relay parent of the candidate. We probably want to make it more flexible in the future, but for now, it will work for a simple use case when a para always has multiple cores assigned to it. --------- Signed-off-by: Matteo Muraca <[email protected]> Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: Matteo Muraca <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Juan Ignacio Rios <[email protected]> Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
-
Matteo Muraca authored
Part of #3326 cc @Kianenigma @ggwpez @liamaharon polkadot address: 12poSUQPtcF1HUPQGY3zZu2P8emuW9YnsPduA4XG3oCEfJVp --------- Signed-off-by: Matteo Muraca <[email protected]>
-
Bastian Köcher authored
Closes: https://github.com/paritytech/polkadot-sdk/issues/3704
-
Juan Ignacio Rios authored
Currently the xcm-executor returns an `Unimplemented` error if it receives any HRMP-related instruction. What I propose here, which is what we are currently doing in our forked executor at Polimec, is to introduce a trait, implemented by the executor, which will handle those instructions. This way, if parachains want to keep the default behavior, they just use `()` and it will return `Unimplemented`, but they can also implement their own logic to establish HRMP channels with other chains in an automated fashion, without having to go through governance. Our implementation is mentioned in the [polkadot HRMP docs](https://arc.net/l/quote/hduiivbu), and it was suggested to us to submit a PR adding these changes to polkadot-sdk. --------- Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: command-bot <>
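A rough sketch of the idea (hypothetical trait and field names, not the exact trait added by this PR): the executor delegates HRMP-related instructions to a configurable handler, and the `()` implementation preserves the old `Unimplemented` behavior.

```rust
/// Stand-in for the executor's error type.
pub enum XcmError { Unimplemented }

/// Sketch: handler consulted when an HRMP channel-open request instruction is executed.
pub trait HandleHrmpChannelRequest {
    fn handle(sender: u32, max_message_size: u32, max_capacity: u32) -> Result<(), XcmError>;
}

/// Default: keep rejecting the instruction, as before.
impl HandleHrmpChannelRequest for () {
    fn handle(_sender: u32, _max_message_size: u32, _max_capacity: u32) -> Result<(), XcmError> {
        Err(XcmError::Unimplemented)
    }
}
```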
-