  1. Mar 26, 2024
    • XCM Fee Payment Runtime API (#3607) · 3c972fc1
      Pavel Orlov authored
      
      
      The PR provides an API for obtaining:
      - the weight required to execute an XCM message,
      - a list of acceptable `AssetId`s for message execution payment,
      - the cost of the weight in the specified acceptable `AssetId`.
      
      It is meant to address an issue where one has to guess how much fee to
      pay for execution. Also, at the moment, a client has to guess which
      assets are acceptable for execution fee payment.
      See the related issue
      https://github.com/paritytech/polkadot-sdk/issues/690.
      With this API, a client is supposed to query the list of the supported
      asset IDs (in the XCM version format the client understands), weigh the
      XCM program the client wants to execute and convert the weight into one
      of the acceptable assets. Note that the client is supposed to know what
      program will be executed on what chains. However, having a small
      companion JS library for the pallet-xcm and xtokens should be enough to
      determine what XCM programs will be executed and where (since these
      pallets compose a known small set of programs).
      ```rust
      pub trait XcmPaymentApi<Call>
      	where
      		Call: Codec,
      	{
      		/// Returns a list of acceptable payment assets.
      		///
      		/// # Arguments
      		///
      		/// * `xcm_version`: Version.
      		fn query_acceptable_payment_assets(xcm_version: Version) -> Result<Vec<VersionedAssetId>, Error>;
      		/// Returns a weight needed to execute a XCM.
      		///
      		/// # Arguments
      		///
      		/// * `message`: `VersionedXcm`.
      		fn query_xcm_weight(message: VersionedXcm<Call>) -> Result<Weight, Error>;
      		/// Converts a weight into a fee for the specified `AssetId`.
      		///
      		/// # Arguments
      		///
      		/// * `weight`: convertible `Weight`.
      		/// * `asset`: `VersionedAssetId`.
      		fn query_weight_to_asset_fee(weight: Weight, asset: VersionedAssetId) -> Result<u128, Error>;
      		/// Get delivery fees for sending a specific `message` to a `destination`.
      		/// These always come in a specific asset, defined by the chain.
      		///
      		/// # Arguments
      		/// * `message`: The message that'll be sent, necessary because most delivery fees are based on the
      		///   size of the message.
      		/// * `destination`: The destination to send the message to. Different destinations may use
      		///   different senders that charge different fees.
      		fn query_delivery_fees(destination: VersionedLocation, message: VersionedXcm<()>) -> Result<VersionedAssets, Error>;
      	}
      ```
      An
      [example](https://gist.github.com/PraetorP/4bc323ff85401abe253897ba990ec29d)
      of client-side code.
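      As a rough illustration of the call sequence described above, here is a
      minimal, self-contained sketch; the client type and its methods are
      simplified stand-ins for the runtime-API calls, not actual polkadot-sdk
      types:
      ```rust
      // Minimal sketch of the intended client flow. `XcmPaymentClient` and its
      // methods stand in for runtime-API calls; only the call order matters here.
      #[derive(Debug, Clone)]
      struct VersionedAssetId(u32 /* XCM version */, String /* asset id */);

      #[derive(Debug, Clone, Copy)]
      struct Weight {
          ref_time: u64,
          proof_size: u64,
      }

      struct XcmPaymentClient;

      impl XcmPaymentClient {
          // Stand-in for `XcmPaymentApi::query_acceptable_payment_assets`.
          fn query_acceptable_payment_assets(&self, _xcm_version: u32) -> Vec<VersionedAssetId> {
              vec![VersionedAssetId(4, "DOT".into())]
          }
          // Stand-in for `query_xcm_weight` over the encoded `VersionedXcm`.
          fn query_xcm_weight(&self, _message: &str) -> Weight {
              Weight { ref_time: 1_000_000, proof_size: 10_000 }
          }
          // Stand-in for `query_weight_to_asset_fee`.
          fn query_weight_to_asset_fee(&self, weight: Weight, _asset: &VersionedAssetId) -> u128 {
              weight.ref_time as u128 / 10
          }
      }

      fn main() {
          let client = XcmPaymentClient;
          // 1. Which assets does the chain accept for fee payment?
          let assets = client.query_acceptable_payment_assets(4);
          // 2. How heavy is the program we intend to execute there?
          let weight = client.query_xcm_weight("<encoded VersionedXcm>");
          // 3. Convert that weight into a fee in one of the accepted assets.
          let fee = client.query_weight_to_asset_fee(weight, &assets[0]);
          println!("pay {fee} of {:?} to execute the program", assets[0]);
      }
      ```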
      
      ---------
      
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Adrian Catangiu <[email protected]>
      Co-authored-by: Daniel Shiposha <[email protected]>
    • westend: `SignedPhase` is a constant (#3646) · 0c15d887
      Bastian Köcher authored
      
      
      In preparation for the merkleized metadata, we need to ensure that
      constants are actually constant. If we want to test the unsigned phase,
      we could, for example, just disable signed voters, or add some extra
      mechanism to the pallet to disable the signed phase from time to time.
      
      ---------
      
      Co-authored-by: Ankan <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
    • Migrate parachain swaps to Coretime (#3714) · 90234543
      Tsvetomir Dimitrov authored
      This PR notifies the broker pallet of any parachain slot swaps performed
      on the relay chain. This is achieved by registering an `OnSwap` hook for
      the `coretime` pallet. The hook sends an XCM message to the broker chain
      and invokes a new extrinsic `swap_leases`, which updates the `Leases`
      storage item (which keeps the legacy parachain leases); a rough sketch of
      that broker-side logic follows the assumptions below.
      
      I made two assumptions in this PR:
      1.
      [`Leases`](https://github.com/paritytech/polkadot-sdk/blob/4987d798/substrate/frame/broker/src/lib.rs#L120)
      in the `broker` pallet and
      [`Leases`](https://github.com/paritytech/polkadot-sdk/blob/4987d798/polkadot/runtime/common/src/slots/mod.rs#L118)
      in the `slots` pallet are in sync.
      2. The `swap_leases` extrinsic from the `broker` pallet can be triggered
      only by root or by an XCM message from the relay chain. Otherwise the
      extrinsic generates an error and does nothing.
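      A rough, self-contained sketch of the broker-side swap described above;
      the map layout and the types are simplified stand-ins for the pallet's
      `Leases` storage, not the actual implementation:
      ```rust
      use std::collections::BTreeMap;

      // Stand-in types; the real broker pallet stores lease records differently.
      type ParaId = u32;
      type Lease = Vec<u8>;

      // Hypothetical `swap_leases(one, other)` logic: whatever legacy lease
      // entries the two paras hold are exchanged.
      fn swap_leases(leases: &mut BTreeMap<ParaId, Lease>, one: ParaId, other: ParaId) {
          let a = leases.remove(&one);
          let b = leases.remove(&other);
          if let Some(b) = b {
              leases.insert(one, b);
          }
          if let Some(a) = a {
              leases.insert(other, a);
          }
      }

      fn main() {
          let mut leases = BTreeMap::from([(2000, b"lease-a".to_vec()), (3000, b"lease-b".to_vec())]);
          swap_leases(&mut leases, 2000, 3000);
          assert_eq!(leases[&2000], b"lease-b".to_vec());
      }
      ```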
      
      As a side effect of the changes, the `OnSwap` trait is moved from
      runtime/common/traits.rs to runtime/parachains; otherwise it would not be
      accessible from the `broker` pallet.
      
      Closes https://github.com/paritytech/polkadot-sdk/issues/3552
      
      TODOs:
      
      - [x] Weights
      - [x] Tests
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: eskimor <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • [subsystem-benchmarks] Save results to json (#3829) · fd79b3b0
      Andrei Eres authored
      
      
      Here we add the ability to save subsystem benchmark results in JSON
      format so that they can be displayed as graphs.

      To draw graphs, the CI team will use
      [github-action-benchmark](https://github.com/benchmark-action/github-action-benchmark).
      Since we are using custom benchmarks, we need to prepare [a specific
      data
      type](https://github.com/benchmark-action/github-action-benchmark?tab=readme-ov-file#examples):
      ```json
      [
          {
              "name": "CPU Load",
              "unit": "Percent",
              "value": 50
          }
      ]
      ```
      
      Then we'll get graphs like this: 
      
      ![example](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/main.png)
      
      [A live page with
      graphs](https://benchmark-action.github.io/github-action-benchmark/dev/bench/)
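      A minimal sketch of emitting entries in the shape shown above (assuming
      `serde` and `serde_json` as dependencies; the struct and the file name are
      illustrative, not the actual benchmark code):
      ```rust
      use serde::Serialize;

      // Mirrors the entry format expected by github-action-benchmark.
      #[derive(Serialize)]
      struct BenchmarkEntry {
          name: String,
          unit: String,
          value: f64,
      }

      fn main() -> Result<(), Box<dyn std::error::Error>> {
          let results = vec![BenchmarkEntry {
              name: "CPU Load".into(),
              unit: "Percent".into(),
              value: 50.0,
          }];
          // The CI job later feeds this file to github-action-benchmark.
          std::fs::write("benchmark-results.json", serde_json::to_string_pretty(&results)?)?;
          Ok(())
      }
      ```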
      
      ---------
      
      Co-authored-by: ordian <[email protected]>
    • Fix spelling mistakes across the whole repository (#3808) · 002d9260
      Dcompoze authored
      **Update:** Pushed additional changes based on the review comments.
      
      **This pull request fixes various spelling mistakes in this
      repository.**
      
      Most of the changes are contained in the first **3** commits:
      
      - `Fix spelling mistakes in comments and docs`
      
      - `Fix spelling mistakes in test names`
      
      - `Fix spelling mistakes in error messages, panic messages, logs and
      tracing`
      
      Other source code spelling mistakes are separated into individual
      commits for easier reviewing:
      
      - `Fix the spelling of 'authority'`
      
      - `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'`
      
      - `Fix the spelling of 'prev_enqueud_messages'`
      
      - `Fix the spelling of 'endpoint'`
      
      - `Fix the spelling of 'children'`
      
      - `Fix the spelling of 'PenpalSiblingSovereignAccount'`
      
      - `Fix the spelling of 'PenpalSudoAccount'`
      
      - `Fix the spelling of 'insufficient'`
      
      - `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'`
      
      - `Fix the spelling of 'subtracted'`
      
      - `Fix the spelling of 'CandidatePendingAvailability'`
      
      - `Fix the spelling of 'exclusive'`
      
      - `Fix the spelling of 'until'`
      
      - `Fix the spelling of 'discriminator'`
      
      - `Fix the spelling of 'nonexistent'`
      
      - `Fix the spelling of 'subsystem'`
      
      - `Fix the spelling of 'indices'`
      
      - `Fix the spelling of 'committed'`
      
      - `Fix the spelling of 'topology'`
      
      - `Fix the spelling of 'response'`
      
      - `Fix the spelling of 'beneficiary'`
      
      - `Fix the spelling of 'formatted'`
      
      - `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'`
      
      - `Fix the spelling of 'succeeded'`
      
      - `Fix the spelling of 'reopened'`
      
      - `Fix the spelling of 'proposer'`
      
      - `Fix the spelling of 'InstantiationNonce'`
      
      - `Fix the spelling of 'depositor'`
      
      - `Fix the spelling of 'expiration'`
      
      - `Fix the spelling of 'phantom'`
      
      - `Fix the spelling of 'AggregatedKeyValue'`
      
      - `Fix the spelling of 'randomness'`
      
      - `Fix the spelling of 'defendant'`
      
      - `Fix the spelling of 'AquaticMammal'`
      
      - `Fix the spelling of 'transactions'`
      
      - `Fix the spelling of 'PassingTracingSubscriber'`
      
      - `Fix the spelling of 'TxSignaturePayload'`
      
      - `Fix the spelling of 'versioning'`
      
      - `Fix the spelling of 'descendant'`
      
      - `Fix the spelling of 'overridden'`
      
      - `Fix the spelling of 'network'`
      
      Let me know if this structure is adequate.
      
      **Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`,
      `Merkelization`, `Merkleization`, is somewhat inconsistent but I left it
      as it is.
      
      ~~**Note:** In some places the term `Receival` is used to refer to
      message reception, IMO `Reception` is the correct word here, but I left
      it as it is.~~
      
      ~~**Note:** In some places the term `Overlayed` is used instead of the
      more acceptable version `Overlaid` but I also left it as it is.~~
      
      ~~**Note:** In some places the term `Applyable` is used instead of the
      correct version `Applicable` but I also left it as it is.~~
      
      **Note:** Some usage of British vs American english e.g. `judgement` vs
      `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize` etc.
      are both present in different places, but I suppose that's
      understandable given the number of contributors.
      
      ~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it
      triggers errors in CI when I make changes to it, so I left it as it
      is.~~
    • Update bridges subtree (#3841) · b839c995
      Serban Iorga authored
      Updating the bridges subtree, hopefully just one last time in this
      fashion, in order to make the final migration less verbose.
    • Fix formatting in Cargo.toml (#3842) · ea97863c
      Dcompoze authored
      Fixes formatting for
      https://github.com/paritytech/polkadot-sdk/pull/3698
  2. Mar 25, 2024
  3. Mar 24, 2024
  4. Mar 23, 2024
  5. Mar 22, 2024
    • [pallet-xcm] fix transport fees for remote reserve transfers (#3792) · 9a04ebbf
      girazoki authored
      Currently `transfer_assets` from pallet-xcm covers four different
      transfer types:
      - `LocalReserve`
      - `DestinationReserve`
      - `Teleport`
      - `RemoteReserve`
      
      For the first three, the local execution and the remote message sending
      are separated, and fees are deducted in pallet-xcm itself:
      https://github.com/paritytech/polkadot-sdk/blob/3410dfb3/polkadot/xcm/pallet-xcm/src/lib.rs#L1758.
      
      For the 4th case, `RemoteReserve`, pallet-xcm is still relying on the
      xcm-executor itself to send the message (through the
      `InitiateReserveWithdraw` instruction). In this case, if delivery fees
      need to be charged, it is not possible to do so because the
      `jit_withdraw` mode has not been set.
      
      This PR proposes to still use `InitiateReserveWithdraw`, but to prepend a
      `SetFeesMode { jit_withdraw: true }` instruction to make sure delivery
      fees can be paid.
      
      A test case is also added to cover the aforementioned scenario.
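      In rough pseudo-XCM terms, the change amounts to the following
      (a self-contained sketch with a simplified instruction enum; the real
      program is built from the `xcm` crate's instruction types and carries the
      assets, the reserve and an onward XCM):
      ```rust
      // Simplified instruction enum, used only to show the new ordering.
      #[derive(Debug)]
      enum Instruction {
          SetFeesMode { jit_withdraw: bool },
          InitiateReserveWithdraw, // assets, reserve and onward XCM elided
      }

      fn remote_reserve_transfer_program() -> Vec<Instruction> {
          vec![
              // Newly prepended: let the executor withdraw delivery fees from the
              // origin "just in time" when the onward message is sent.
              Instruction::SetFeesMode { jit_withdraw: true },
              // Existing behaviour: move the assets via the remote reserve.
              Instruction::InitiateReserveWithdraw,
          ]
      }

      fn main() {
          println!("{:?}", remote_reserve_transfer_program());
      }
      ```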
      
      ---------
      
      Co-authored-by: Adrian Catangiu <[email protected]>
    • XCM remove extra QueryId types from traits (#3763) · 2f59e9ef
      PG Herveou authored
      We do not need to make these traits generic over the `QueryId` type; we
      can just use the `QueryId` alias everywhere.
    • Make public addresses go first in authority discovery DHT records (#3757) · 9d2963c2
      Dmitry Markin authored
      Make sure that public addresses explicitly set by the operator go first
      in the authority discovery DHT records.
      
      Also update `Discovery` behavior to eliminate duplicates in the returned
      addresses.
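      A tiny, self-contained sketch of the intended ordering and deduplication
      (addresses are plain strings here rather than real `Multiaddr`s; the
      function is illustrative only):
      ```rust
      use std::collections::HashSet;

      // Operator-supplied public addresses first, then the discovered ones, with
      // duplicates dropped while preserving order.
      fn ordered_addresses(public: &[&str], discovered: &[&str]) -> Vec<String> {
          let mut seen = HashSet::new();
          public
              .iter()
              .chain(discovered.iter())
              .filter(|a| seen.insert(**a))
              .map(|a| a.to_string())
              .collect()
      }

      fn main() {
          let record = ordered_addresses(
              &["/dns/validator.example/tcp/30333"],
              &["/ip4/10.0.0.1/tcp/30333", "/dns/validator.example/tcp/30333"],
          );
          assert_eq!(record[0], "/dns/validator.example/tcp/30333");
          assert_eq!(record.len(), 2);
      }
      ```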
      
      This PR should improve the situation with
      https://github.com/paritytech/polkadot-sdk/issues/3519.
      
      Obsoletes https://github.com/paritytech/polkadot-sdk/pull/3657.
    • Add a linear fee multiplier (#127) (#3790) · 22d5b80d
      Vincent Geddes authored
      
      
      Bridging fees are calculated using a static ETH/DOT exchange rate that
      can deviate significantly from the real-world exchange rate. We
      therefore need to add a safety margin to the fee so that users almost
      always cover the cost of relaying.
      
      # FAQ
      
      > Why introduce a `multiplier` parameter instead of configuring an
      exchange rate which already has a safety factor applied?
      
      When converting from ETH to DOT, we need to _divide_ the multiplier by
      the exchange rate, and to convert from DOT to ETH we need to _multiply_
      the multiplier by the exchange rate.
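      Illustrative-only arithmetic for the two directions (plain `f64` instead
      of the runtime's fixed-point types, and the rate is assumed to be quoted
      as ETH per DOT):
      ```rust
      // The multiplier scales the computed cost; the exchange rate is divided by
      // in one direction and multiplied by in the other, which is why a separate
      // `multiplier` is cleaner than baking a margin into the rate itself.
      fn eth_cost_to_dot_fee(cost_eth: f64, eth_per_dot: f64, multiplier: f64) -> f64 {
          multiplier * cost_eth / eth_per_dot
      }

      fn dot_cost_to_eth_fee(cost_dot: f64, eth_per_dot: f64, multiplier: f64) -> f64 {
          multiplier * cost_dot * eth_per_dot
      }

      fn main() {
          let eth_per_dot = 0.0025; // statically configured rate (illustrative)
          let multiplier = 1.33; // safety margin against rate drift
          println!("{} DOT", eth_cost_to_dot_fee(0.01, eth_per_dot, multiplier));
          println!("{} ETH", dot_cost_to_eth_fee(10.0, eth_per_dot, multiplier));
      }
      ```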
      
      > Other input parameters to the fee calculation can also deviate from
      real-world values. These include substrate weights, gas prices, and so
      on. Why does the multiplier introduced here not adjust those?
      
      A single scalar multiplier won't be able to accommodate the different
      volatilities efficiently. For example, gas prices are much more volatile
      than exchange rates, and substrate weights hardly ever change.
      
      So the pricing config relating to weights and gas prices should already
      have some appropriate safety margin pre-applied.
      
      # Detailed Changes:
      
      * Added `multiplier` field to `PricingParameters`
      * Outbound-queue fee is multiplied by `multiplier`
      * This `multiplier` is synced to the Ethereum side
      * Improved Runtime API for calculating outbound-queue fees. This API
      makes it much easier to configure parts of the system in preparation
      for launch.
      * Improve and clarify code documentation
      
      Upstreamed from https://github.com/Snowfork/polkadot-sdk/pull/127
      
      ---------
      
      Co-authored-by: Clara van Staden <[email protected]>
      Co-authored-by: Adrian Catangiu <[email protected]>
    • Snowbridge Beacon header age check (#3727) · 3410dfb3
      Clara van Staden authored
      ## Bug Explanation
      Adds a check that prevents finalized headers from being imported with a
      gap larger than the sync committee period, which could leave execution
      headers in the gap unprovable. The current version of the Ethereum
      client checks that there is a header at least once every sync committee
      period, but it doesn't check that consecutive headers are within a sync
      committee period of each other. For example:
      
      Header 100 (sync committee period 1)
      Header 9000 (sync committee period 2)
      (8900 blocks apart)
      
      These headers are in adjacent sync committees, but more than the sync
      committee period (8192 blocks) apart.
      
      The reason we need a header at least every 8192 slots is that the header
      is used to prove messages within the last 8192 blocks. If we import
      header 9000 and then receive a message to be verified at header 200, the
      `block_roots` field of header 9000 won't contain that header, so the
      ancestry check cannot be done.
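      The added check is roughly of the following shape (a simplified sketch;
      the real Ethereum client works in slots with its own constants and error
      types):
      ```rust
      // One sync committee period worth of slots, which is also the depth a
      // header's `block_roots` can prove.
      const SLOTS_PER_SYNC_COMMITTEE_PERIOD: u64 = 8_192;

      fn check_header_gap(latest_finalized_slot: u64, new_finalized_slot: u64) -> Result<(), &'static str> {
          // A finalized header can only prove ancestors within the last 8192 slots,
          // so a larger gap would leave execution headers unprovable.
          if new_finalized_slot.saturating_sub(latest_finalized_slot) > SLOTS_PER_SYNC_COMMITTEE_PERIOD {
              return Err("finalized header gap larger than the sync committee period");
          }
          Ok(())
      }

      fn main() {
          assert!(check_header_gap(100, 8_000).is_ok());
          assert!(check_header_gap(100, 9_000).is_err());
      }
      ```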
      
      ## Environment
      This edge case was discovered while running on Rococo, after the relayer
      was offline for a few days. It is unlikely, but not impossible, for it to
      happen again, and so it should be backported to polkadot-sdk 1.7.0 (so
      that
      [polkadot-fellows/runtimes](https://github.com/polkadot-fellows/runtimes)
      can be updated with the fix).
      
      Our Ethereum client has been operational on Rococo for the past few
      months, and this has been the only major issue discovered so far.
      
      ### Unrelated Change
      An unrelated nit: removes a leftover file that should have been deleted
      when the `parachain` directory was removed.
      
      ---------
      
      Co-authored-by: claravanstaden <Cats 4 life!>
    • Adding LF's bootnodes to relay and system chains (#3514) · ea5f4e9a
      Will | Paradox | ParaNodes.io authored
      
      
      Good day,
      
      I'm seeking to add the following bootnodes for Kusama and Polkadot's
      relay and system chains. The following commands can be used to test
      connectivity. All node keys are backed up.
      
      Polkadot:
      ```
      polkadot --chain polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot.luckyfriday.io/tcp/443/wss/p2p/12D3KooWAdyiVAaeGdtBt6vn5zVetwA4z4qfm9Fi2QCSykN1wTBJ" --no-hardware-benchmarks
      ```
      
      
      Assethub-Polkadot:
      
      ```
      polkadot-parachain --chain asset-hub-polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot-assethub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWDR9M7CjV1xdjCRbRwkFn1E7sjMaL4oYxGyDWxuLrFc2J" --no-hardware-benchmarks
      
      ```
      
      Bridgehub-Polkadot:
      
      ```
      polkadot-parachain --chain bridge-hub-polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot-bridgehub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWKf3mBXHjLbwtPqv1BdbQuwbFNcQQYxASS7iQ25264AXH" --no-hardware-benchmarks
      
      ```
      Collectives-Polkadot
      
      ```
      polkadot-parachain --chain collectives-polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-polkadot-collectives.luckyfriday.io/tcp/443/wss/p2p/12D3KooWCzifnPooTt4kvTnXT7FTKTymVL7xn7DURQLsS2AKpf6w" --no-hardware-benchmarks
      
      ```
      Kusama:
      
      ```
      polkadot --chain kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-kusama.luckyfriday.io/tcp/443/wss/p2p/12D3KooWS1Lu6DmK8YHSvkErpxpcXmk14vG6y4KVEFEkd9g62PP8" --no-hardware-benchmarks
      
      ```
      Assethub-Kusama:
      
      ```
      polkadot-parachain --chain asset-hub-kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-kusama-assethub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWSwaeFs6FNgpgh54fdoxSDAA4nJNaPE3PAcse2GRrG7b3" --no-hardware-benchmarks
      ```
      
      Bridgehub-Kusama:
      
      ```
      polkadot-parachain --chain bridge-hub-kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/boot-kusama-bridgehub.luckyfriday.io/tcp/443/wss/p2p/12D3KooWQybw6AFmAvrFfwUQnNxUpS12RovapD6oorh2mAJr4xyd" --no-hardware-benchmarks
      ```
      
      Co-authored-by: Bastian Köcher <[email protected]>
  6. Mar 21, 2024
    • Revert `SendXcmOrigin` in Rococo & Westend (#2571) · 01d65f6b
      Alejandro Martinez Andres authored
      
      
      Based on issue
      [#2512](https://github.com/paritytech/polkadot-sdk/issues/2512), it
      seems that some ecosystem teams are using these networks to set up their
      staging environments and test certain use cases, some of them involving
      sending XCMs from the relay with origins not allowed in the current
      configuration.
      
      This change reverts the configuration of `SendXcmOrigin`.
      
      ---------
      
      Co-authored-by: Adrian Catangiu <[email protected]>
    • Fix toml formatting (#3782) · 46ba8550
      Tsvetomir Dimitrov authored
      Make taplo happy
    • there's a typo (#3779) · 9922fd39
      kvalerio authored
      
      
      There was a typo, so now, there's no more typo.
      
      Co-authored-by: Liam Aharon <[email protected]>
    • approval-voting: remove some inefficiences on startup (#3747) · 64a707a4
      ordian authored
      Small refactoring to reduce the algorithmic complexity of the initial
      message distribution in approval voting after a sync from O(n_candidates
      ^ 2) to O(n_candidates).
    • Elastic scaling: runtime dependency tracking and enactment (#3479) · 4842faf6
      Alin Dima authored
      
      
      Changes needed to implement the runtime part of elastic scaling:
      https://github.com/paritytech/polkadot-sdk/issues/3131,
      https://github.com/paritytech/polkadot-sdk/issues/3132,
      https://github.com/paritytech/polkadot-sdk/issues/3202
      
      Also fixes https://github.com/paritytech/polkadot-sdk/issues/3675
      
      TODOs:
      
      - [x] storage migration
      - [x] optimise process_candidates from O(N^2)
      - [x] drop backable candidates which form cycles
      - [x] fix unit tests
      - [x] add more unit tests
      - [x] check the runtime APIs which use the pending availability storage.
      We need to expose all of them, see
      https://github.com/paritytech/polkadot-sdk/issues/3576
      - [x] optimise the candidate selection. we're currently picking randomly
      until we satisfy the weight limit. we need to be smart about not
      breaking candidate chains while being fair to all paras -
      https://github.com/paritytech/polkadot-sdk/pull/3573
      
      Relies on the changes made in
      https://github.com/paritytech/polkadot-sdk/pull/3233 in terms of the
      inclusion policy and the candidate ordering
      
      ---------
      
      Signed-off-by: alindima <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: eskimor <[email protected]>
    • [Backport] Reformat release notes generation (#3759) · 75074952
      Egor_P authored
      This PR backports small reformatting of the release notes templates.
    • [Backport] version bumps and prdocs reordering 1.9.0 (#3758) · 7b6b061e
      Egor_P authored
      This PR backports:
      - node version bump
      - `spec_version` bump
      - reordering of the `prdocs` to the appropriate folder
      from the `1.9.0` release branch
    • Migrates Westend to Runtime V2 (#3754) · 93b1abb2
      gupnik authored
      Step in https://github.com/paritytech/polkadot-sdk/issues/3688
  7. Mar 20, 2024
    • Enable PoV reclaim on `rococo-parachain` (#3765) · 1da8a6b8
      s0me0ne-unkn0wn authored
      This PR proposes enabling PoV reclaim on the `rococo-parachain`
      testchain to streamline testing and development of high-TPS stuff.
    • Fix algorithmic complexity of on-demand scheduler with regards to number of cores. (#3190) · b74353d3
      eskimor authored
      
      
      We witnessed really poor performance on Rococo, where we ended up with
      50 on-demand cores. This was due to the fact that for each core the full
      queue was processed. With this change, full queue processing will happen
      far less often (most of the time the complexity is O(1) or O(log(n))),
      and if it happens, then only for one core (in expectation).

      Also, the spot price is now updated before each order to ensure economic
      back pressure.
      
      
      TODO:
      
      - [x] Implement
      - [x] Basic tests
      - [x] Add more tests (see todos)
      - [x] Run benchmark to confirm better performance, first results suggest
      > 100x faster.
      - [x] Write migrations
      - [x] Bump scale-info version and remove patch in Cargo.toml
      - [x] Write PR docs: on-demand performance improved; more on-demand
      cores are no longer problematic. If needed, the max queue
      size can also be increased again (maybe not to 10k).
      
      Optional: performance can be improved even more if we called
      `pop_assignment_for_core()` before calling `report_processed` (avoiding
      needless affinity drops). The effect gets smaller the larger the claim
      queue is, and I would only go for it if it does not add complexity to
      the scheduler.
      
      ---------
      
      Co-authored-by: eskimor <[email protected]>
      Co-authored-by: antonva <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>
      Co-authored-by: ordian <[email protected]>
    • Defensive Programming in Substrate Reference Document (#2615) · b686bfef
      bader y authored
      
      
      _This PR is being continued from
      https://github.com/paritytech/polkadot-sdk/pull/2206, which was closed
      when the developer_hub was merged._
      closes https://github.com/paritytech/polkadot-sdk-docs/issues/44
      
      ---
      # Description
      
      This PR adds a reference document to the `developer-hub` crate (see
      https://github.com/paritytech/polkadot-sdk/pull/2102). This specific
      reference document covers defensive programming practices common within
      the context of developing a runtime with Substrate.
      
      In particular, this covers the following areas: 
      
      - Default behavior of how Rust deals with numbers in general
      - How to deal with floating point numbers in runtime / fixed point
      arithmetic
      - How to deal with integer overflows
      - General "safe math" / defensive programming practices for common
      pallet development scenarios (a short sketch follows this list)
      - Defensive traits that exist within Substrate, i.e.,
      `defensive_saturating_add`, `defensive_unwrap_or`
      - More general defensive programming examples (keep it concise)
      - Link to relevant examples where these practices are actually in
      production / being used
      - Unwrapping (or rather lack thereof) 101
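      A short sketch of the "safe math" style covered by the document, using
      only standard integer methods (the `defensive_*` helpers behave similarly
      but additionally log and debug-assert on the unexpected path):
      ```rust
      fn credit_balance(balance: u64, deposit: u64) -> u64 {
          // Handle overflow explicitly instead of silently wrapping or panicking;
          // runtime code would treat this branch as a defensive failure.
          balance.checked_add(deposit).unwrap_or(u64::MAX)
      }

      fn debit_balance(balance: u64, withdrawal: u64) -> Option<u64> {
          // Returning `None` forces the caller to decide what an underflow means.
          balance.checked_sub(withdrawal)
      }

      fn main() {
          assert_eq!(credit_balance(u64::MAX - 1, 5), u64::MAX);
          assert_eq!(debit_balance(10, 3), Some(7));
          assert_eq!(debit_balance(3, 10), None);
      }
      ```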
      
      TODO:
      - [x] Apply feedback from previous PR
      - [x] This may warrant a PR to append some of these docs to
      `sp_arithmetic`
      
      ---------
      
      Co-authored-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Gonçalo Pestana <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Radha <[email protected]>
    • Fix typos (#3753) · 7241a8db
      slicejoke authored
    • Bump anyhow from 1.0.75 to 1.0.81 (#3752) · bb973aa0
      dependabot[bot] authored
      Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.75 to 1.0.81.
      <details>
      <summary>Release notes</summary>
      <p><em>Sourced from <a
      href="https://github.com/dtolnay/anyhow/releases">anyhow's
      releases</a>.</em></p>
      <blockquote>
      <h2>1.0.81</h2>
      <ul>
      <li>Make backtrace support available when using -Dwarnings (<a
      href="https://redirect.github.com/dtolnay/anyhow/issues/354">#354</a>)</li>
      </ul>
      <h2>1.0.80</h2>
      <ul>
      <li>Fix unused_imports warnings when compiled by rustc 1.78</li>
      </ul>
      <h2>1.0.79</h2>
      <ul>
      <li>Work around improperly cached build script result by sccache (<a
      href="https://redirect.github.com/dtolnay/anyhow/issues/340">#340</a>)</li>
      </ul>
      <h2>1.0.78</h2>
      <ul>
      <li>Reduce spurious rebuilds under RustRover IDE when using a nightly
      toolchain (<a
      href="https://redirect.github.com/dtolnay/anyhow/issues/337">#337</a>)</li>
      </ul>
      <h2>1.0.77</h2>
      <ul>
      <li>Make <code>anyhow::Error::backtrace</code> available on stable Rust
      compilers 1.65+ (<a
      href="https://redirect.github.com/dtolnay/anyhow/issues/293">#293</a>,
      thanks <a
      href="https://github.com/LukasKalbertodt"><code>@​LukasKalbertodt</code></a>)</li>
      </ul>
      <h2>1.0.76</h2>
      <ul>
      <li>Opt in to <code>unsafe_op_in_unsafe_fn</code> lint (<a
      href="https://redirect.github.com/dtolnay/anyhow/issues/329">#329</a>)</li>
      </ul>
      </blockquote>
      </details>
      <details>
      <summary>Commits</summary>
      <ul>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/4aad4edebd9f09247d6c6b6784419a74bb116829"><code>4aad4ed</code></a>
      Release 1.0.81</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/8be90917c603199c5d1fdd73984237f023768e22"><code>8be9091</code></a>
      Merge pull request <a
      href="https://redirect.github.com/dtolnay/anyhow/issues/354">#354</a>
      from dtolnay/deadcode</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/a2eb7dd5e13add83f254b6dac0f68e043effc521"><code>a2eb7dd</code></a>
      Make compatible with -Dwarnings</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/54437197ee79c20678db433d98616fab7ddff1a5"><code>5443719</code></a>
      Release 1.0.80</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/dfc7bc07d4c41b61093c3251ed82becb51810bd4"><code>dfc7bc0c
      
      </code></a>
      Work around prelude redundant import warnings</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/6e4f86b48b5182ec71dbc8e308db9dc91e2ec8a5"><code>6e4f86b</code></a>
      Import from alloc not std, where possible</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/f885a133ede579c45e90ab489455126403d05db1"><code>f885a13</code></a>
      Ignore incompatible_msrv clippy false positives in test</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/fefbcbcb0b336a2d6c2ce6f0ee6d3fd02ef2cd3b"><code>fefbcbc</code></a>
      Ignore incompatible_msrv clippy lint</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/78f2d81cc71b79050a2fda270c45ff267557d853"><code>78f2d81</code></a>
      Update ui test suite to nightly-2024-02-08</li>
      <li><a
      href="https://github.com/dtolnay/anyhow/commit/edd88d3a43f11f1931330d3dd54189353ef00203"><code>edd88d3</code></a>
      Update ui test suite to nightly-2024-01-31</li>
      <li>Additional commits viewable in <a
      href="https://github.com/dtolnay/anyhow/compare/1.0.75...1.0.81">compare
      view</a></li>
      </ul>
      </details>
      <br />
      
      
      [![Dependabot compatibility
      score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=anyhow&package-manager=cargo&previous-version=1.0.75&new-version=1.0.81)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
      
      Dependabot will resolve any conflicts with this PR as long as you don't
      alter it yourself. You can also trigger a rebase manually by commenting
      `@dependabot rebase`.
      
      [//]: # (dependabot-automerge-start)
      [//]: # (dependabot-automerge-end)
      
      ---
      
      <details>
      <summary>Dependabot commands and options</summary>
      <br />
      
      You can trigger Dependabot actions by commenting on this PR:
      - `@dependabot rebase` will rebase this PR
      - `@dependabot recreate` will recreate this PR, overwriting any edits
      that have been made to it
      - `@dependabot merge` will merge this PR after your CI passes on it
      - `@dependabot squash and merge` will squash and merge this PR after
      your CI passes on it
      - `@dependabot cancel merge` will cancel a previously requested merge
      and block automerging
      - `@dependabot reopen` will reopen this PR if it is closed
      - `@dependabot close` will close this PR and stop Dependabot recreating
      it. You can achieve the same result by closing it manually
      - `@dependabot show <dependency name> ignore conditions` will show all
      of the ignore conditions of the specified dependency
      - `@dependabot ignore this major version` will close this PR and stop
      Dependabot creating any more for this major version (unless you reopen
      the PR or upgrade to it yourself)
      - `@dependabot ignore this minor version` will close this PR and stop
      Dependabot creating any more for this minor version (unless you reopen
      the PR or upgrade to it yourself)
      - `@dependabot ignore this dependency` will close this PR and stop
      Dependabot creating any more for this dependency (unless you reopen the
      PR or upgrade to it yourself)
      
      
      </details>
      
      Signed-off-by: dependabot[bot] <[email protected]>
      Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
    • Expose `ClaimQueue` via a runtime api and use it in `collation-generation` (#3580) · e58e854a
      Tsvetomir Dimitrov authored
      The PR adds two things:
      1. Runtime API exposing the whole claim queue
      2. Consumes the API in `collation-generation` to fetch the next
      scheduled `ParaEntry` for an occupied core (a rough sketch follows below).
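      A rough sketch of the idea (the types and the lookup below are
      assumptions meant only to illustrate the "core index to queued paras"
      shape, not the actual runtime-API declaration):
      ```rust
      use std::collections::{BTreeMap, VecDeque};

      // Illustrative stand-ins for the primitives' types.
      type CoreIndex = u32;
      type ParaId = u32;

      // What `collation-generation` needs: for an occupied core, which para is
      // scheduled next on it.
      fn next_scheduled(claim_queue: &BTreeMap<CoreIndex, VecDeque<ParaId>>, core: CoreIndex) -> Option<ParaId> {
          claim_queue.get(&core).and_then(|q| q.front().copied())
      }

      fn main() {
          let mut claim_queue = BTreeMap::new();
          claim_queue.insert(0, VecDeque::from([2000, 2001]));
          assert_eq!(next_scheduled(&claim_queue, 0), Some(2000));
      }
      ```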
      
      Related to https://github.com/paritytech/polkadot-sdk/issues/1797
  8. Mar 19, 2024
    • Contracts: Test benchmarking v2 (#3585) · e659c4b3
      PG Herveou authored
      Co-authored-by: command-bot <>
    • chore: bump zombienet version (1.3.95) (#3745) · c486da32
      Javier Viola authored
      Bump zombienet version; this version has the latest version of the
      `@polkadot/api` module and fixes the failures in CI (e.g.
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5570106).
      
      Thanks!
    • Implement crypto byte array newtypes in term of a shared type (#3684) · 1e9fd237
      Davide Galassi authored
      Introduces the `CryptoBytes` type, defined as:
      
      ```rust
      pub struct CryptoBytes<const N: usize, Tag = ()>(pub [u8; N], PhantomData<fn() -> Tag>);
      ```
      
      The type implements a bunch of methods and traits which are typically
      expected from a byte array newtype
      (NOTE: some of the methods and trait implementations IMO are a bit
      redundant, but I decided to maintain them all to not change too much
      stuff in this PR)
      
      It also introduces two (generic) typical consumers of `CryptoBytes`:
      `PublicBytes` and `SignatureBytes`.
      
      ```rust
      pub struct PublicTag;
      pub type PublicBytes<const N: usize, CryptoTag> = CryptoBytes<N, (PublicTag, CryptoTag)>;

      pub struct SignatureTag;
      pub type SignatureBytes<const N: usize, CryptoTag> = CryptoBytes<N, (SignatureTag, CryptoTag)>;
      ```
      
      Both of them use a tag to differentiate the two types at a higher level.
      Downstream specializations will further specialize using a dedicated
      crypto tag. For example in ECDSA:
      
      
      ```rust
      pub struct EcdsaTag;
      
      pub type Public = PublicBytes<PUBLIC_KEY_SERIALIZED_SIZE, EcdsaTag>;
      pub type Signature = SignatureBytes<SIGNATURE_SERIALIZED_SIZE, EcdsaTag>;
      ```
      
      Overall we have cleaner and, most importantly, **consistent** code for
      all the types involved.

      All these details are opaque to the end user, who can use `Public` and
      `Signature` for the various cryptos as before.
    • collator-side: send parent head data (#3521) · 5fd72a1f
      ordian authored
      
      
      On top of #3302.
      
      We want the validators to upgrade first before we add changes to the
      collation side to send the new variants, which is why this part is
      extracted into a separate PR.
      
      The detection of when to send the parent head is based on the core
      assignments at the relay parent of the candidate. We probably want to
      make it more flexible in the future, but for now, it will work for a
      simple use case when a para always has multiple cores assigned to it.
      
      ---------
      
      Signed-off-by: Matteo Muraca <[email protected]>
      Signed-off-by: dependabot[bot] <[email protected]>
      Co-authored-by: Matteo Muraca <[email protected]>
      Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
      Co-authored-by: Juan Ignacio Rios <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • rpc: Enable `transaction_unstable_broadcast` RPC V2 API (#3713) · 923f27cc
      Alexandru Vasile authored
      
      
      This PR enables the `transaction_unstable_broadcast` and
      `transaction_unstable_stop` RPC APIs.
      
      Since the API is unstable, we don't need to expose this in the release
      notes.
      
      After merging this, we could validate the API in subxt and stabilize it.
      
      Spec PR that stabilizes the API:
      https://github.com/paritytech/json-rpc-interface-spec/pull/139
      
      cc @paritytech/subxt-team
      
      Signed-off-by: Alexandru Vasile <[email protected]>