  2. May 29, 2024
    • Broker new price adapter (#4521) · f4dc8d22
      eskimor authored
      
      Fixes #4360 
      
      Also rename: `AllowedRenewals` -> `PotentialRenewals` to avoid confusing
      future readers. (An entry in `AllowedRenewals` is not enough to allow a
      renewal; the assignment also has to be complete, which is only checked
      afterwards.)
      
      - [x] Does not work with renewals as is - fix.
      - [x] More tests
      - [x] PR docs
      
      Edit 1:
      (Relevant blog post:
      https://grillapp.net/12935/agile-coretime-pricing-explained-166522?ref=29715)
      
      ---------
      
      Co-authored-by: eskimor <eskimor@no-such-url.com>
      Co-authored-by: Dónal Murray <donal.murray@parity.io>
      Co-authored-by: command-bot <>
      f4dc8d22
    • Change `XcmDryRunApi::dry_run_extrinsic` to take a call instead (#4621) · d5053ac4
      Francisco Aguirre authored
      
      Follow-up to the new `XcmDryRunApi` runtime API introduced in
      https://github.com/paritytech/polkadot-sdk/pull/3872.
      
      Taking an extrinsic means the frontend has to sign first to dry-run and
      once again to submit.
      This is bad UX, which is solved by taking an `origin` and a `call`.
      This also has the benefit of being able to dry-run as any account, since
      it needs no signature.
      
      This is a breaking change since I changed `dry_run_extrinsic` to
      `dry_run_call`, however, this API is still only on testnets.
      The crates are bumped accordingly.
      
      As a part of this PR, I changed the name of the API from `XcmDryRunApi`
      to just `DryRunApi`, since it can be used for general dry-running :)
      
      Step towards https://github.com/paritytech/polkadot-sdk/issues/690.
      
      Example of calling the API with PAPI, not the best code, just testing :)
      
      ```ts
      // We just build a call, the arguments make it look very big though.
      const call = localApi.tx.XcmPallet.transfer_assets({
        dest: XcmVersionedLocation.V4({ parents: 0, interior: XcmV4Junctions.X1(XcmV4Junction.Parachain(1000)) }),
        beneficiary: XcmVersionedLocation.V4({ parents: 0, interior: XcmV4Junctions.X1(XcmV4Junction.AccountId32({ network: undefined, id: Binary.fromBytes(encodeAccount(account.address)) })) }),
        weight_limit: XcmV3WeightLimit.Unlimited(),
        assets: XcmVersionedAssets.V4([{
          id: { parents: 0, interior: XcmV4Junctions.Here() },
          fun: XcmV3MultiassetFungibility.Fungible(1_000_000_000_000n) }
        ]),
        fee_asset_item: 0,
      });
      // We call the API passing in a signed origin 
      const result = await localApi.apis.XcmDryRunApi.dry_run_call(
        WestendRuntimeOriginCaller.system(DispatchRawOrigin.Signed(account.address)),
        call.decodedCall
      );
      if (result.success && result.value.execution_result.success) {
        // We find the forwarded XCM we want. The first one going to AssetHub in this case.
        const xcmsToAssetHub = result.value.forwarded_xcms.find(([location, _]) => (
          location.type === "V4" &&
            location.value.parents === 0 &&
            location.value.interior.type === "X1"
            && location.value.interior.value.type === "Parachain"
            && location.value.interior.value.value === 1000
        ))!;
      
        // We can even find the delivery fees for that forwarded XCM.
        const deliveryFeesQuery = await localApi.apis.XcmPaymentApi.query_delivery_fees(xcmsToAssetHub[0], xcmsToAssetHub[1][0]);
      
        if (deliveryFeesQuery.success) {
          const amount = deliveryFeesQuery.value.type === "V4" && deliveryFeesQuery.value.value[0].fun.type === "Fungible" && deliveryFeesQuery.value.value[0].fun.value.valueOf() || 0n;
          // We store them in state somewhere.
          setDeliveryFees(formatAmount(BigInt(amount)));
        }
      }
      ```
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      d5053ac4
    • Moves runtime macro out of experimental flag (#4249) · 5f68c930
      gupnik authored
      
      Step in https://github.com/paritytech/polkadot-sdk/issues/3688
      
      Now that the `runtime` macro (Construct Runtime V2) has been
      successfully deployed on Westend, this PR moves it out of the
      experimental feature flag and makes it generally available for runtime
      devs.
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
      5f68c930
  3. May 28, 2024
    • parachain-inherent: Make `para_id` more prominent (#4555) · 2b1c606a
      Bastian Köcher authored
      This should make it more obvious that at instantiation of the
      `MockValidationDataInherentDataProvider` the `para_id` needs to be
      passed.
      2b1c606a
    • Improve On_demand_assigner events (#4339) · 650b124f
      Bolaji Ahmad authored
      
      title: Improving `on_demand_assigner` emitted events
      
      doc:
        - audience: Runtime User
      description: Adds an `OnDemandOrderPlaced` event that is useful for
      indexers to save data related to on-demand orders. Check [the discussion
      here](https://substrate.stackexchange.com/questions/11366/ondemandassignmentprovider-ondemandorderplaced-event-was-removed/11389#11389).
      
      Closes #4254 
      
      crates: [ 'runtime-parachain' ]
      
      ---------
      
      Co-authored-by: Maciej <maciej.zyszkiewicz@parity.io>
      650b124f
    • Add availability-recovery from systematic chunks (#1644) · 523e6256
      Alin Dima authored
      **Don't look at the commit history, it's confusing, as this branch is
      based on another branch that was merged**
      
      Fixes #598 
      Also implements [RFC
      #47](https://github.com/polkadot-fellows/RFCs/pull/47)
      
      ## Description
      
      - Availability-recovery now first attempts to request the systematic
      chunks for large POVs (the first ~n/3 chunks, which can recover the full
      data without the costly Reed-Solomon decoding process). It falls back to
      recovering from all chunks if for some reason that process fails.
      Additionally, backers are used as a backup for requesting the systematic
      chunks if the assigned validator is not offering the chunk (each backer is
      only used for one systematic chunk, so as not to overload them).
      - Quite obviously, recovering from systematic chunks is much faster than
      recovering from regular chunks (4000% faster as measured on my apple M2
      Pro).
      - Introduces a `ValidatorIndex` -> `ChunkIndex` mapping which is
      different for ...
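The ~n/3 figure above comes from the erasure-coding threshold. A minimal sketch of the math (the exact formula is an assumption based on the description, not code from this PR): with n validators, f + 1 chunks suffice to recover the data, and the first `threshold` chunks are the systematic ones, so concatenating them yields the original data with no Reed-Solomon decoding.

```rust
// Assumed threshold: f + 1 where n >= 3f + 1, i.e. floor((n - 1) / 3) + 1.
// The first `recovery_threshold(n)` chunks are the systematic chunks.
fn recovery_threshold(n_validators: usize) -> usize {
    (n_validators - 1) / 3 + 1
}
```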
      523e6256
  4. May 27, 2024
    • `sc-chain-spec`: deprecated code removed (#4410) · 2d3a6932
      Michal Kucharczyk authored
      This PR removes deprecated code:
      - The `RuntimeGenesisConfig` generic type parameter in the
      `GenericChainSpec` struct.
      - The `ChainSpec::from_genesis` method, which allowed creating a chain
      spec from a closure providing the runtime genesis struct.
      - The `GenesisSource::Factory` variant, together with the no-longer-needed
      generic parameter `G` of `GenesisSource` (which was intended to be a
      runtime genesis struct).
      
      
      https://github.com/paritytech/polkadot-sdk/blob/17b56fae/substrate/client/chain-spec/src/chain_spec.rs#L559-L563
      2d3a6932
    • check-weight: Disable total pov size check for mandatory extrinsics (#4571) · 70dd67a5
      Sebastian Kunert authored
      In some pallets, like
      [here](https://github.com/paritytech/polkadot-sdk/blob/5dc522d0/substrate/frame/session/src/lib.rs#L556),
      we use `max_block` as the return value of `on_initialize` (ideally we
      would not).
      
      This means the block is already full when we try to apply the inherents,
      which leads to the error seen in #4559 because we are unable to include
      the required inherents. This was not erroring before #4326 because we
      were running into this branch:
      
      https://github.com/paritytech/polkadot-sdk/blob/e4b89cc5/substrate/frame/system/src/extensions/check_weight.rs#L222-L224
      
      The inherents are of `DispatchClass::Mandatory` and therefore have a
      `reserved` value of `None` in all runtimes I have inspected. So they
      will always pass the normal check.
      
      So in this PR I adjust the `check_combined_proof_size` to return an
      early `Ok(())` for mandatory extrinsics.
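The adjustment can be sketched like this (names and signature are illustrative, not the actual frame-system code): extrinsics of `DispatchClass::Mandatory` now skip the combined PoV size check entirely, so required inherents can always be applied even when `on_initialize` already consumed `max_block`.

```rust
#[derive(PartialEq)]
enum DispatchClass {
    Normal,
    Operational,
    Mandatory,
}

fn check_combined_proof_size(
    class: &DispatchClass,
    max_pov_size: u64,
    combined_size: u64,
) -> Result<(), &'static str> {
    // Early return added by this PR: mandatory extrinsics always pass.
    if *class == DispatchClass::Mandatory {
        return Ok(());
    }
    if combined_size > max_pov_size {
        return Err("total PoV size limit exceeded");
    }
    Ok(())
}
```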
      
      If we agree on this PR I will backport it to the 1.12.0 branch.
      
      closes #4559
      
      ---------
      
      Co-authored-by: command-bot <>
      70dd67a5
    • chore: fix typos (#4590) · 89b67bc6
      omahs authored
      chore: fix typos
      89b67bc6
    • Deprecate XCMv2 (#4131) · 9201f9ab
      Francisco Aguirre authored
      
      Marked XCMv2 as deprecated now that we have XCMv4.
      It will be removed sometime around June 2024.
      
      ---------
      
      Co-authored-by: Branislav Kontur <bkontur@gmail.com>
      9201f9ab
  5. May 24, 2024
    • Polkadot-SDK Umbrella Crate (#3935) · 1c7a1a58
      Oliver Tale-Yazdi authored
      # Umbrella Crate
      
      The Polkadot-SDK "umbrella" is a crate that re-exports all other
      published crates. This makes it
      possible to have a very small `Cargo.toml` file that only has one
      dependency, the umbrella
      crate. This helps with selecting the right combination of crate
      versions, since otherwise 3rd
      party tools are needed to select a compatible set of versions.
      
      ## Features
      
      The umbrella crate supports no-std builds and can therefore be used in
      the runtime and node.
      There are two main features: `runtime` and `node`. The `runtime` feature
      enables all `no-std`
      crates, while the `node` feature enables all `std` crates. It should be
      used like any other
      crate in the repo, with `default-features = false`.
      
      For more fine-grained control, additionally, each crate can be enabled
      selectively. The umbrella
      exposes one feature per dependency. For example, if you only want to use
      the `frame-support`
      crate, you can enable the `frame-support` feature.
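A minimal `Cargo.toml` sketch of the usage described above (the crate and feature names follow the text; the version is illustrative):

```toml
[dependencies]
# `runtime` enables all no-std crates; use `node` for the std/node-side crates.
polkadot-sdk = { version = "0.1.0", default-features = false, features = ["runtime"] }

# Fine-grained alternative: enable only the crates you need.
# polkadot-sdk = { version = "0.1.0", default-features = false, features = ["frame-support"] }
```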
      
      The umbrella exposes a few more g...
      1c7a1a58
  6. May 23, 2024
    • Contracts: Rework host fn benchmarks (#4233) · 493ba5e2
      PG Herveou authored
      
      fix https://github.com/paritytech/polkadot-sdk/issues/4163
      
      This PR does the following:
      Update to pallet-contracts-proc-macro:
      - Parse `#[cfg]` so we can add a dummy noop host function for benchmarks.
      - Generate `BenchEnv::<host_fn>` so we can call host functions directly in
      the benchmark.
      - Add the weight of the noop host function before calling the host
      function itself.

      Update benchmarks:
      - Update all host function benchmarks: a host function benchmark now
      simply calls the host function, instead of invoking it n times
      from within a contract.
      - Refactor `RuntimeCosts` & `Schedule`: for most host functions, we can
      now use the generated weight function directly instead of computing the
      diff with the `cost!` macro.
      
      ```rust
      // Before
      #[benchmark(pov_mode = Measured)]
      fn seal_input(r: Linear<0, API_BENCHMARK_RUNS>) {
          let code = WasmModule::<T>::from(ModuleDefinition {
              memory: Some(ImportedMemory::max::<T>()),
              imported_functions: vec![ImportedFunction {
                  module: "seal0",
                  name: "seal_input",
                  params: vec![ValueType::I32, ValueType::I32],
                  return_type: None,
              }],
              data_segments: vec![DataSegment { offset: 0, value: 0u32.to_le_bytes().to_vec() }],
              call_body: Some(body::repeated(
                  r,
                  &[
                      Instruction::I32Const(4), // ptr where to store output
                      Instruction::I32Const(0), // ptr to length
                      Instruction::Call(0),
                  ],
              )),
              ..Default::default()
          });
      
          call_builder!(func, code);
      
          let res;
          #[block]
          {
              res = func.call();
          }
          assert_eq!(res.did_revert(), false);
      }
      ```
      
      ```rust
      // After
      fn seal_input(n: Linear<0, { code::max_pages::<T>() * 64 * 1024 - 4 }>) {
          let mut setup = CallSetup::<T>::default();
          let (mut ext, _) = setup.ext();
          let mut runtime = crate::wasm::Runtime::new(&mut ext, vec![42u8; n as usize]);
          let mut memory = memory!(n.to_le_bytes(), vec![0u8; n as usize],);
          let result;
          #[block]
          {
              result = BenchEnv::seal0_input(&mut runtime, &mut memory, 4, 0)
          }
          assert_ok!(result);
          assert_eq!(&memory[4..], &vec![42u8; n as usize]);
      }
      ``` 
      
      [Weights
      compare](https://weights.tasty.limo/compare?unit=weight&ignore_errors=true&threshold=10&method=asymptotic&repo=polkadot-sdk&old=master&new=pg%2Frework-host-benchs&path_pattern=substrate%2Fframe%2Fcontracts%2Fsrc%2Fweights.rs%2Cpolkadot%2Fruntime%2F*%2Fsrc%2Fweights%2F**%2F*.rs%2Cpolkadot%2Fbridges%2Fmodules%2F*%2Fsrc%2Fweights.rs%2Ccumulus%2F**%2Fweights%2F*.rs%2Ccumulus%2F**%2Fweights%2Fxcm%2F*.rs%2Ccumulus%2F**%2Fsrc%2Fweights.rs)
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Alexander Theißen <alex.theissen@me.com>
      493ba5e2
  8. May 21, 2024
    • Replace `Multiaddr` & related types with substrate-specific types (#4198) · d05786ff
      Dmitry Markin authored
      This PR introduces custom types / substrate wrappers for `Multiaddr`,
      `multiaddr::Protocol`, `Multihash`, `ed25519::*` and supplementary types
      like errors and iterators.
      
      This is needed to unblock `libp2p` upgrade PR
      https://github.com/paritytech/polkadot-sdk/pull/1631 after
      https://github.com/paritytech/polkadot-sdk/pull/2944 was merged.
      `libp2p` and `litep2p` currently depend on different versions of
      `multiaddr` crate, and introduction of this "common ground" types is
      needed to support independent version upgrades of `multiaddr` and
      dependent crates in `libp2p` & `litep2p`.
      
      Not tying the versions of the `libp2p` & `litep2p` dependencies together
      is convenient in itself, but it's currently not even possible to keep
      them updated to the same versions: `multiaddr` in `libp2p` depends on
      `libp2p-identity`, which we can't include as a dependency of `litep2p`,
      which has its own `PeerId` type. In the future, to keep things updated on
      the `litep2p` side, we will likely need to fork `multiaddr` and make it
      use the `litep2p` `PeerId` as the payload of the `/p2p/...` protocol.
      
      With these changes, common code in substrate uses these custom types,
      and `litep2p` & `libp2p` backends use corresponding libraries types.
      d05786ff
    • Bridge: added force_set_pallet_state call to pallet-bridge-grandpa (#4465) · e0e1f2d6
      Svyatoslav Nikolsky authored
      closes https://github.com/paritytech/parity-bridges-common/issues/2963
      
      See issue above for rationale
      I've been thinking about adding similar calls to other pallets, but:
      - for the parachains pallet I haven't been able to think of a case where
      we will need that, given how long a referendum takes. I.e. if the storage
      proof format changes and we want to unstuck the bridge, it'll take a few
      weeks to sync a single parachain header, then another few weeks for the
      next one, and so on;
      - for the messages pallet I made a similar call initially, but it just
      changes a storage key (`OutboundLanes` and/or `InboundLanes`), so
      there's no logic in it and it may simply be done using
      `system.set_storage`.
      
      ---------
      
      Co-authored-by: command-bot <>
      e0e1f2d6
    • Fixed RPC subscriptions leak when subscription stream is finished (#4533) · d54feeb1
      Svyatoslav Nikolsky authored
      closes https://github.com/paritytech/parity-bridges-common/issues/3000
      
      Recently we've changed our bridge configuration for Rococo <> Westend
      and our new relayer has started to submit transactions every ~`30`
      seconds. Eventually, it switches itself into a limbo state, where it
      can't submit more transactions - all `author_submitAndWatchExtrinsic`
      calls fail with the following error: `ERROR bridge Failed to send
      transaction to BridgeHubRococo node: Call(ErrorObject { code:
      ServerError(-32006), message: "Too many subscriptions on the
      connection", data: Some(RawValue("Exceeded max limit of 1024")) })`.
      
      Some links for those who want to explore:
      - server side (node) has a strict limit on a number of active
      subscriptions. It fails to open a new subscription if this limit is hit:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/server/src/middleware/rpc/layer/rpc_service.rs#L122-L132.
      The limit is set to `1024` by default;
      - internally this limit is a semaphore with `limit` permits:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L461-L485;
      - semaphore permit is acquired in the first link;
      - the permit is "returned" when the `SubscriptionSink` is dropped:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L310-L325;
      - the `SubscriptionSink` is dropped when [this `polkadot-sdk`
      function](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L58-L94)
      returns. In other words - when the connection is closed, the stream is
      finished or internal subscription buffer limit is hit;
      - the subscription has an internal buffer, so sending an item consists
      of two steps: [reading an item from the underlying
      stream](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L125-L141)
      and [sending it over the
      connection](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L111-L116);
      - when the underlying stream is finished, the `inner_pipe_from_stream`
      wants to ensure that all items are sent to the subscriber. So it: [waits
      until the current send operation
      completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L146-L148)
      and then [send all remaining items from the internal
      buffer](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L150-L155).
      Once it is done, the function returns, the `SubscriptionSink` is
      dropped, semaphore permit is dropped and we are ready to accept new
      subscriptions;
      - unfortunately, the code just calls `pending_fut.await.is_err()` to
      ensure that [the current send operation
      completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L146-L148).
      But if there is no current send operation (which is normal), then
      `pending_fut` is set to a terminated future and the `await` never
      completes. Hence no return from the function, no drop of
      `SubscriptionSink`, no drop of the semaphore permit, and no new
      subscriptions allowed (once the number of subscriptions hits the limit).
      
      I've illustrated the issue with a small test - you may verify that if
      e.g. the stream is initially empty, the
      `subscription_is_dropped_when_stream_is_empty` test will hang because
      `pipe_from_stream` never exits.
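The permit mechanics described above can be sketched with an RAII guard (illustrative only, not jsonrpsee's actual types): the permit is returned only when its guard is dropped, so a function that never returns keeps its permit forever and eventually exhausts the limit, exactly as in this bug.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// A counting limit: `acquire` hands out a guard, and the guard's `Drop`
// impl returns the permit. If a guard is never dropped (e.g. an `await`
// that never completes), the slot is leaked.
struct Permits {
    active: Arc<AtomicUsize>,
    limit: usize,
}

struct Permit {
    active: Arc<AtomicUsize>,
}

impl Permits {
    fn acquire(&self) -> Option<Permit> {
        // Single-threaded sketch; a real semaphore makes this check atomic.
        if self.active.load(Ordering::SeqCst) >= self.limit {
            return None; // "Too many subscriptions on the connection"
        }
        self.active.fetch_add(1, Ordering::SeqCst);
        Some(Permit { active: self.active.clone() })
    }
}

impl Drop for Permit {
    fn drop(&mut self) {
        self.active.fetch_sub(1, Ordering::SeqCst);
    }
}
```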
      d54feeb1
    • Remove the prospective-parachains subsystem from collators (#4471) · 278486f9
      Alin Dima authored
      Implements https://github.com/paritytech/polkadot-sdk/issues/4429
      
      Collators only need to maintain the implicit view for the paraid they
      are collating on.
      In this case, bypass prospective-parachains entirely. It's still useful
      to use the GetMinimumRelayParents message from prospective-parachains
      for validators, because the data is already present there.
      
      This enables us to entirely remove the subsystem from collators, which
      needlessly consumed resources.
      
      Aims to resolve https://github.com/paritytech/polkadot-sdk/issues/4167 
      
      TODO:
      - [x] fix unit tests
      278486f9
  10. May 17, 2024
    • PG Herveou · a90d324d
    • Allow pool to be destroyed with an extra (erroneous) consumer reference on the pool account (#4503) · 2e36f571
      Ankan authored
      addresses https://github.com/paritytech/polkadot-sdk/issues/4440 (will
      close once we have this in prod runtimes).
      related: https://github.com/paritytech/polkadot-sdk/issues/2037.
      
      An extra consumer reference is preventing pools from being destroyed.
      When a pool is ready to be destroyed, we can safely clear the consumer
      references, if any. Notably, I only check for one extra consumer
      reference, since that is a known bug. Anything more indicates possibly
      another issue, and we probably don't want to silently absorb those
      errors as well.
      
      After this change, pools with extra consumer reference should be able to
      destroy normally.
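The defensive rule above can be sketched as follows (names are illustrative, not the pallet's actual API): exactly one extra consumer reference is treated as the known bug and cleared; anything more is surfaced as an error rather than silently absorbed.

```rust
// Returns the corrected consumer count, or an error if the state looks
// like something other than the one known bug.
fn clear_erroneous_consumers(extra_consumers: u32) -> Result<u32, &'static str> {
    match extra_consumers {
        // None, or the single known erroneous reference: safe to clear.
        0 | 1 => Ok(0),
        // More than one extra reference is unexpected; don't absorb it.
        _ => Err("unexpected number of extra consumer references"),
    }
}
```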
      2e36f571
  11. May 16, 2024
    • Adds `MaxRank` Config in `pallet-core-fellowship` (#3393) · d5fe478e
      Jesse Chejieh authored
      
      resolves #3315
      
      ---------
      
      Co-authored-by: doordashcon <jessechejieh@doordashcon.local>
      Co-authored-by: command-bot <>
      Co-authored-by: Bastian Köcher <git@kchr.de>
      d5fe478e
    • Snowbridge - Ethereum Client - Reject finalized updates without a sync... · 943eb46e
      Clara van Staden authored
      Snowbridge - Ethereum Client - Reject finalized updates without a sync committee in next store period (#4478)
      
      While syncing Ethereum consensus updates to the Snowbridge Ethereum
      light client, the syncing process stalled due to error
      `InvalidSyncCommitteeUpdate` when importing the next sync committee for
      period `1087`.
      
      This bug manifested specifically because our light client checkpoint is
      a few weeks old (submitted to governance weeks ago) and had to catch up
      to a recent block. Since then, we have done thorough testing of the
      catch-up sync process.
      
      ### Symptoms
      - Import next sync committee for period `1086` (essentially period
      `1087`). Light client store period = `1086`.
      - Import header in period `1087`. Light client store period = `1087`.
      The current and next sync committees are not updated, and are now in an
      outdated state. (current sync committee = `1086` and next sync
      committee = `1087`, where it should be current sync committee = `1087`
      and next sync committee = `None`)
      - Import next sync committee for period `1087` (essentially period
      `1088`) fails because the expected next sync committee's roots don't
      match.
      
      ### Bug
      The bug here is that the current and next sync committees didn't hand
      over when an update in the next period was received.
      
      ### Fix
      There are two possible fixes here:
      1. Correctly hand over sync committees when a header in the next period
      is received.
      2. Reject updates in the next period until the next sync committee
      period is known.
      
      We opted for solution 2, which is more conservative and requires fewer
      changes.
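Option 2 can be sketched as a simple acceptance rule (types and names are illustrative, not the Ethereum client's actual code): an update belonging to the next sync-committee period is rejected until the next sync committee for the current store period is known, which forces the handover to happen in order.

```rust
// Decide whether a finalized update may be imported, given the light
// client's current store period and whether its next sync committee is
// already known.
fn accept_finalized_update(
    store_period: u64,
    update_period: u64,
    next_sync_committee_known: bool,
) -> bool {
    if update_period <= store_period {
        true // same (or older) period: no handover involved
    } else if update_period == store_period + 1 {
        // Reject updates in the next period until the handover data exists.
        next_sync_committee_known
    } else {
        false // more than one period ahead is never acceptable
    }
}
```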
      
      ### Polkadot-sdk versions
      This fix should be backported in polkadot-sdk versions 1.7 and up.
      
      Snowfork PR: https://github.com/Snowfork/polkadot-sdk/pull/145
      
      ---------
      
      Co-authored-by: Vincent Geddes <117534+vgeddes@users.noreply.github.com>
      943eb46e
    • Remove pallet::getter usage from the democracy pallet (#4472) · 04f88f5b
      polka.dom authored
      As per #3326, removes usage of the pallet::getter macro from the
      democracy pallet. The syntax `StorageItem::<T, I>::get()` should be used
      instead.
      
      cc @muraca
      04f88f5b
    • [Runtime] Bound XCMP queue (#3952) · 4adfa37d
      Oliver Tale-Yazdi authored
      Re-applying #2302 after increasing the `MaxPageSize`.  
      
      Remove `without_storage_info` from the XCMP queue pallet. Part of
      https://github.com/paritytech/polkadot-sdk/issues/323
      
      Changes:
      - Limit the number of messages and signals an HRMP channel can have at
      most.
      - Limit the number of HRMP channels.
      
      A no-op migration is put in place to ensure that all `BoundedVec`s still
      decode and do not truncate after the upgrade. The storage version is
      thereby bumped to 5 so that our tooling reminds us to deploy that
      migration.
      
      ## Integration
      
      If you see this error in your try-runtime-cli:  
      ```pre
      Max message size for channel is too large. This means that the V5 migration can be front-run and an
      attacker could place a large message just right before the migration to make other messages un-decodable.
      Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
      102400, MaxPageSize: 65535
      ```
      
      Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp...
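The pre-flight check behind that error message can be sketched like this (function name and signature are illustrative): if a channel's `max_message_size` exceeds `MaxPageSize`, an attacker could enqueue a message just before the V5 migration that no bounded page could hold, so the check refuses to proceed.

```rust
// Returns Ok(()) when the V5 migration cannot be front-run for this
// channel, or the error text otherwise.
fn v5_migration_is_safe(max_message_size: u32, max_page_size: u32) -> Result<(), String> {
    if max_message_size > max_page_size {
        Err(format!(
            "Max message size for channel is too large. Channel max: {}, MaxPageSize: {}",
            max_message_size, max_page_size
        ))
    } else {
        Ok(())
    }
}
```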
      4adfa37d
    • Remove pallet::getter usage from the bounties and child-bounties pallets (#4392) · 6487ac1e
      polka.dom authored
      
      As per #3326, removes pallet::getter usage from the bounties and
      child-bounties pallets. The syntax `StorageItem::<T, I>::get()` should
      be used instead.
      
      Changes to one pallet involved changes in the other, so I figured it'd
      be best to combine these two.
      
      cc @muraca
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      6487ac1e
    • Deprecate `dmp-queue` pallet (#4475) · 76230a15
      Oliver Tale-Yazdi authored
      `cumulus-pallet-dmp-queue` is not needed anymore since
      https://github.com/paritytech/polkadot-sdk/pull/1246.
      
      The only logic that remains in the pallet is a lazy migration in the
      [`on_idle`](https://github.com/paritytech/polkadot-sdk/blob/8d62c13b/cumulus/pallets/dmp-queue/src/lib.rs#L158)
      hook.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      76230a15
  12. May 15, 2024
    • Export all public functions of `sc-service` (#4457) · 59d7e037
      Dastan authored
      
      https://github.com/paritytech/polkadot-sdk/pull/3166 made private
      functions used in `spawn_tasks()` public but forgot to add them to the
      exported functions of the crate.
      
      ---------
      
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
      59d7e037
    • Fix extrinsics count logging in frame-system (#4461) · 404027e5
      Liu-Cheng Xu authored
      
      The storage item `ExtrinsicIndex` is already taken before the
      `finalize()` in `note_finished_extrinsics()`, so it's always 0 in the
      log. This commit fixes it by using the proper API for the extrinsics
      count.
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      404027e5
    • Introduces: Delegated Staking Pallet (#3904) · 4d47b443
      Ankan authored
      This is the second PR in preparation for
      https://github.com/paritytech/polkadot-sdk/issues/454.
      
      ## Also see
      - **Precursor** https://github.com/paritytech/polkadot-sdk/pull/3889.
      - **Follow up** https://github.com/paritytech/polkadot-sdk/pull/3905.
      
      Overall changes are documented here (a lot more visually :heart_eyes:):
      https://hackmd.io/@ak0n/454-np-governance
      
      ## Changes
      ### Delegation Interface
      Provides delegation primitives for staking. 
      
      Introduces two new roles:
      - Agent: accounts that receive delegations from other accounts
      (delegators) and stake on their behalf. The funds are held in the
      delegator accounts.
      - Delegator: accounts that delegate their funds to an agent, authorising
      it to use them for staking.
      
      Supports
      - A way for delegators to add or withdraw delegation to an agent.
      - A way for an agent to slash a delegator during a slashing event.
      
      ### Pallet Delegated Staking
      - Implements `DelegationInterface`.
      - Lazy slashing: any slash to an agent is posted in a ledger but not
      immediately applied. The agent can call
      `DelegationInterface::delegator_slash` to slash the member and clear the
      corresponding slash from its ledger.
      - Consumes `StakingInterface` to provide `CoreStaking` features. In
      reality, this will be `pallet-staking`.
      - Ensures bookkeeping for agents and delegators is correct but leaves
      the management of reward and slash logic up to the consumer of this
      pallet.
      - While it does not expose any calls yet, it is written with the intent
      of exposing these primitives via extrinsics.
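The bookkeeping described above can be sketched as follows (illustrative only, not the pallet's storage layout or API): funds remain under each delegator's account, while the agent's ledger tracks the total it may stake on their behalf plus any lazily recorded, not-yet-applied slash.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct AgentLedger {
    total_delegated: u128,
    pending_slash: u128, // lazy slashing: recorded here, applied later
}

#[derive(Default)]
struct DelegatedStaking {
    agents: HashMap<String, AgentLedger>,
    // (delegator, agent) -> amount, held in the delegator's own account.
    delegations: HashMap<(String, String), u128>,
}

impl DelegatedStaking {
    fn delegate(&mut self, delegator: &str, agent: &str, amount: u128) {
        *self
            .delegations
            .entry((delegator.to_string(), agent.to_string()))
            .or_insert(0) += amount;
        self.agents.entry(agent.to_string()).or_default().total_delegated += amount;
    }

    // Record a slash against the agent's ledger; a later
    // `delegator_slash`-style call would apply it to a concrete delegator.
    fn note_slash(&mut self, agent: &str, amount: u128) {
        self.agents.entry(agent.to_string()).or_default().pending_slash += amount;
    }
}
```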
      
      ## TODO
      - [x] Improve unit tests in the pallet.
      - [x] Separate slash reward perbill for rewarding the slash reporters?
      - [x] Review if we should add more events.
      
      ---------
      
      Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
      Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com>
      4d47b443
    • Change forks pruning algorithm. (#3962) · 9c69bb98
      shamil-gadelshin authored
      
      This PR changes the fork calculation and pruning algorithm to enable
      future block header pruning. It's required because the previous
      algorithm relied on block header persistence. It follows the
      [related
      discussion](https://github.com/paritytech/polkadot-sdk/issues/1570).
      
      The previous code contained this comment describing the situation:
      ```
      	/// Note a block height finalized, displacing all leaves with number less than the finalized
      	/// block's.
      	///
      	/// Although it would be more technically correct to also prune out leaves at the
      	/// same number as the finalized block, but with different hashes, the current behavior
      	/// is simpler and our assumptions about how finalization works means that those leaves
      	/// will be pruned soon afterwards anyway.
      	pub fn finalize_height(&mut self, number: N) -> FinalizationOutcome<H, N> {
      ```
      
      The previous algorithm relied on existing block headers to prune forks
      later. To enable block header pruning, we now need to clear all obsolete
      forks right after block finalization, so that we no longer depend on the
      related block headers in the future.
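      The stricter displacement at finalization can be sketched as follows (a
      simplified illustration with made-up types: besides leaves below the
      finalized height, leaves *at* the finalized height with a different hash
      are displaced too; the real implementation additionally walks the tree to
      drop entire obsolete forks, not just leaves):

      ```rust
      #[derive(Debug, Clone, PartialEq)]
      pub struct Leaf {
          pub hash: &'static str,
          pub number: u64,
      }

      /// Retain only leaves that can still descend from the finalized block.
      pub fn finalize_leaves(leaves: &mut Vec<Leaf>, finalized_hash: &str, finalized_number: u64) {
          leaves.retain(|l| {
              // Keep leaves above the finalized height, plus the finalized
              // block itself if it happens to be a leaf; everything else is
              // an obsolete fork and is cleared immediately.
              l.number > finalized_number
                  || (l.number == finalized_number && l.hash == finalized_hash)
          });
      }
      ```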
      
      ---------
      
      Co-authored-by: default avatarBastian Köcher <git@kchr.de>
      9c69bb98
  13. May 13, 2024
    • Éloïs's avatar
      improve MockValidationDataInherentDataProvider to support async backing (#4442) · 594c3ed5
      Éloïs authored
      Support async backing in `--dev` mode
      
      This PR improves the relay mock `MockValidationDataInherentDataProvider`
      to match the expectations of async backing runtimes.
      
      * Add para_head in the mock relay proof
      * Add relay slot in the mock relay proof 
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/4437
      594c3ed5
    • Alin Dima's avatar
      prospective-parachains rework (#4035) · d36da12e
      Alin Dima authored
      
      Reworks prospective-parachains so that we allow a number of unconnected
      candidates (for which we don't know the parent candidate yet). Needed
      for elastic scaling:
      https://github.com/paritytech/polkadot-sdk/issues/3541. Without this,
      candidate B will not be validated and backed until candidate A (its
      parent) is validated and a backing statement reaches the validator.
      
      Due to the high complexity of the subsystem, I rewrote parts of it so
      that we don't concern ourselves with candidates which form cycles or
      which form parachain forks. We now have "Fragment chains" instead of
      "Fragment trees". This greatly simplifies some of the code and is a
      compromise we can make. We just need to make sure that cycle-producing
      parachains don't brick the relay chain and that fork-producing
      parachains can still make some progress (on one core at least). The only
      forks that are allowed are those on the relay chain, obviously.
      
      Unconnected candidates are kept in the `CandidateStorage` and whenever a
      new candidate is introduced, we try to repopulate the chain with as many
      candidates as we can.
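      The repopulation step can be sketched roughly as follows (a hedged
      illustration: `Candidate`, `repopulate_chain`, and `u32` head values are
      stand-ins for the subsystem's real types; within a fragment chain, forks
      and cycles are disallowed, so at most one child per head is followed and
      the walk stops if a head would repeat):

      ```rust
      use std::collections::HashMap;

      #[derive(Clone)]
      pub struct Candidate {
          pub output_head: u32,
      }

      /// Starting from the latest known head, keep appending candidates from
      /// storage (keyed by parent head) until no child candidate is known.
      pub fn repopulate_chain(start_head: u32, storage: &HashMap<u32, Candidate>) -> Vec<u32> {
          let mut chain = Vec::new();
          let mut head = start_head;
          while let Some(c) = storage.get(&head) {
              if chain.contains(&c.output_head) || c.output_head == start_head {
                  break; // cycle guard: cycle-producing parachains must not loop forever
              }
              chain.push(c.output_head);
              head = c.output_head;
          }
          chain
      }
      ```

      An unconnected candidate simply sits in storage until the chain's tip
      reaches its parent head, at which point a later repopulation picks it up.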
      
      Also fixes https://github.com/paritytech/polkadot-sdk/issues/3219
      
      Guide changes will be done as part of:
      https://github.com/paritytech/polkadot-sdk/issues/3699
      
      TODOs:
      
      - [x] see if we can replace the `Cow` over the candidate commitments
      with an `Arc` over the entire `ProspectiveCandidate`. It's only being
      overwritten in unit tests. We can work around that.
      - [x] finish fragment_chain unit tests
      - [x] add more prospective-parachains subsystem tests
      - [x] test with zombienet what happens if a parachain is creating cycles
      (it should not brick the relay chain).
      - [x] test with zombienet a parachain that is creating forks. it should
      keep producing blocks from time to time (one bad collator should not DOS
      the parachain, even if throughput decreases)
      - [x] add some more logs and metrics
      - [x] add prdoc and remove the "silent" label
      
      ---------
      
      Signed-off-by: default avatarAndrei Sandu <andrei-mihail@parity.io>
      Co-authored-by: default avatarAndrei Sandu <andrei-mihail@parity.io>
      d36da12e
    • Sebastian Kunert's avatar
      `CheckWeight` SE: Check for extrinsic length + proof size combined (#4326) · 6d3a6d85
      Sebastian Kunert authored
      Previously, the `CheckWeight` `SignedExtension` tracked the size of the
      proof and the extrinsic length separately. But in reality we need one
      more check ensuring that we don't hit the PoV limit with both combined.
      
      The rest of the logic remains unchanged. One scenario where the changes
      make a difference is when we enter this branch:
      
      https://github.com/paritytech/polkadot-sdk/blob/f34d8e3c/substrate/frame/system/src/extensions/check_weight.rs#L185-L198
      
      This previously allowed some extrinsics that exceed the block limit but
      are within the reserved area of `BlockWeights`. Such extrinsics will now
      be caught by the later check I introduced. I think the new behaviour
      makes sense, since the proof size dimension is designed for parachains,
      and they don't want to go over the limit and get rejected.
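      The combined check can be sketched like this (hedged: `Limits` and
      `check_combined` are illustrative names, not the actual `CheckWeight`
      API; the real extension works with `Weight` and block-length types):

      ```rust
      pub struct Limits {
          pub max_block_length: u64, // extrinsic length budget
          pub max_proof_size: u64,   // proof-size dimension of the weight limit
          pub max_pov_size: u64,     // total PoV budget
      }

      pub fn check_combined(len: u64, proof_size: u64, limits: &Limits) -> Result<(), &'static str> {
          // The two pre-existing, independent checks.
          if len > limits.max_block_length {
              return Err("extrinsic length exceeds block length limit");
          }
          if proof_size > limits.max_proof_size {
              return Err("proof size exceeds weight limit");
          }
          // The new check: both dimensions combined must still fit in the PoV.
          if len.saturating_add(proof_size) > limits.max_pov_size {
              return Err("combined length + proof size exceeds PoV limit");
          }
          Ok(())
      }
      ```

      An extrinsic can thus pass both individual checks yet still be rejected
      by the combined one, which is exactly the gap this PR closes.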
      
      
      In the long run we should maybe get rid of `RuntimeBlockLength`
      altogether; however, that would require a deprecation process and can
      come at a later point.
      
      ---------
      
      Co-authored-by: default avatarAdrian Catangiu <adrian@parity.io>
      6d3a6d85
    • Oliver Tale-Yazdi's avatar
      Rococo AH: undeploy trie migration (#4414) · 805d54dd
      Oliver Tale-Yazdi authored
      
      The state-trie migration is completed on Rococo Asset-Hub as
      double-checked
      [here](https://github.com/paritytech/polkadot-sdk/issues/4174#issuecomment-2097895275).
      Undeploying now.
      
      ---------
      
      Signed-off-by: default avatarOliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      805d54dd