  1. Feb 12, 2024
    • Add bridge zombienet test back to the CI (#3264) · e661dc0b
      Serban Iorga authored
      Related to https://github.com/paritytech/polkadot-sdk/issues/3176
      
      This PR only adds the first bridge zombienet test back to the CI after
      fixing it, reverting
      https://github.com/paritytech/polkadot-sdk/pull/3071
      
      Credits to @svyatonik for building all the CI infrastructure around
      this.
      e661dc0b
    • rpc-v2/tx: Implement `transaction_unstable_broadcast` and `transaction_unstable_stop` (#3079) · bde0bbe5
      Alexandru Vasile authored
      
      
      This PR implements the
      [transaction_unstable_broadcast](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_broadcast.md)
      and
      [transaction_unstable_stop](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_stop.md).
      
      
      The
      [transaction_unstable_broadcast](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_broadcast.md)
      submits the provided transaction at the best block of the chain.
      If the transaction is dropped or declared invalid, the API tries to
      resubmit the transaction at the next available best block.
      
      ### Broadcasting 
      The broadcasting operation continues until either:
      
      - the user called `transaction_unstable_stop` with the operation ID that
      identifies the broadcasting operation
      - the transaction state is one of the following:
        - Finalized: the transaction is part of the chain
        - FinalityTimeout: we have waited for 256 finalized blocks and timed out
        - Usurped: the transaction has been replaced in the tx pool
        
      The broadcasting retries to submit the transaction when the transaction
      state is:
      - Invalid: the transaction might become valid at a later time
      - Dropped: the transaction pool's capacity is full at the moment, but
      might clear when other transactions are finalized/dropped
      
      ### Stopping
      
      The `transaction_unstable_broadcast` spawns an abortable future and
      tracks the abort handler.
      When the
      [transaction_unstable_stop](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_stop.md)
      is called with a valid operation ID, the abort handler of the
      corresponding `transaction_unstable_broadcast` future is called. This
      behavior ensures the broadcast future finishes on the next polling.
      When `transaction_unstable_stop` is called with an invalid operation
      ID, a JSON-RPC-specific error object is returned.
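      
      A minimal sketch of the abort pattern described above (illustrative only,
      not the actual RPC server code; the bookkeeping type and method names are
      assumptions):
      ```rust
      use std::collections::HashMap;
      use futures::future::{abortable, Abortable, AbortHandle};
      
      /// Tracks one abort handle per broadcast operation ID.
      #[derive(Default)]
      struct Broadcasts {
          handles: HashMap<String, AbortHandle>,
      }
      
      impl Broadcasts {
          /// Wrap the broadcast future so it can be stopped later; the caller spawns it.
          fn start<F: std::future::Future>(&mut self, id: String, broadcast: F) -> Abortable<F> {
              let (fut, handle) = abortable(broadcast);
              self.handles.insert(id, handle);
              fut // completes on its next poll once `abort` has been called
          }
      
          /// Returns false for unknown operation IDs, mirroring the invalid-ID error case.
          fn stop(&mut self, id: &str) -> bool {
              match self.handles.remove(id) {
                  Some(handle) => { handle.abort(); true }
                  None => false,
              }
          }
      }
      ```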
      
      
      ### Testing
      
      This PR adds the testing harness of the transaction API and validates
      two basic scenarios:
      - transaction enters and exits the transaction pool
      - transaction stop returns appropriate values when called with valid and
      invalid operation IDs
      
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/3039
      
      Note that the API should be enabled after:
      https://github.com/paritytech/polkadot-sdk/issues/3084.
      
      cc @paritytech/subxt-team
      
      ---------
      
      Signed-off-by: Alexandru Vasile <[email protected]>
      Co-authored-by: Sebastian Kunert <[email protected]>
      bde0bbe5
    • Bump coretime-rococo to get leases fix (#3289) · cecdf760
      Dónal Murray authored
      Leases can be force set, but since Leases is a StorageValue, if a lease
      misses its sale rotation in which it should expire, it can never be
      cleared.
      
      This can happen if a lease is added with an until timeslice that lies in
      a region whose sale has already started or has passed, even if the
      timeslice itself hasn't passed.
      
      Trappist is currently trapped in a lease that will never end, so this
      will remove it at the next sale rotation.
      
      A fix was introduced in
      https://github.com/paritytech/polkadot-sdk/pull/3213 but this missed the
      1.7 release. This PR bumps the `coretime-rococo` version to get these
      changes on Rococo.
      cecdf760
    • Lift dependencies to the workspace (Part 1) (#2070) · e80c2473
      Oliver Tale-Yazdi authored
      Changes (partial https://github.com/paritytech/polkadot-sdk/issues/994):
      - Set log to `0.4.20` everywhere
      - Lift `log` to the workspace
      
      Starting with a simpler one after seeing
      https://github.com/paritytech/polkadot-sdk/pull/2065 from @jsdw.
      
      This sets the `default-features` to `false` in the root and then
      overwrites that in each crate to its original value. This is necessary
      since otherwise the `default` features are additive and it's impossible
      to disable them in the crate again once they are enabled in the
      workspace.
      
      I am using a tool to do this, so it's mostly a test to see that it works
      as expected.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      e80c2473
    • statement-distribution: Fix CostMinor("Unexpected Statement") (#3223) · 8362a681
      Alexandru Gheorghe authored
      
      
      On grid distribution, messages have two paths for reaching a node, so
      there is the possibility of a race when two peers send each other the
      same statement around the same time. The statement's local_knowledge will
      tell us that the peer should not have sent the statement because we
      already sent it to them.
      
      Fix it by keeping track of the statements we received from a given peer
      and penalizing the peer only if it sends us the same statement more than once.
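      
      A minimal sketch of the per-peer bookkeeping described above (the types
      are simplified stand-ins, not the actual statement-distribution
      structures):
      ```rust
      use std::collections::{HashMap, HashSet};
      
      type PeerId = u64;
      type StatementHash = [u8; 32];
      
      /// Statements we have already received, tracked per peer.
      #[derive(Default)]
      struct ReceivedStatements {
          per_peer: HashMap<PeerId, HashSet<StatementHash>>,
      }
      
      impl ReceivedStatements {
          /// Returns true if this peer already sent us the statement and should be
          /// penalized; a first receipt over the "other" grid path is tolerated.
          fn note_received(&mut self, peer: PeerId, statement: StatementHash) -> bool {
              !self.per_peer.entry(peer).or_default().insert(statement)
          }
      }
      ```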
      
      Fixes: https://github.com/paritytech/polkadot-sdk/issues/2346
      
      Additionally, also use different Cost labels for different paths to make
      it easier to debug things.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
      8362a681
    • [ci] Don't run prdoc and labels GHA on master (#3257) · c3489aaf
      Alexander Samusev authored
      PR adds condition to ignore master branch for prdoc and labels GHA.
      
      This option doesn't work because all PRs target master, thus the actions
      won't start:
      ```yml
      on:
        pull_request:
          branches-ignore:
            - master
      ```
      
      This option doesn't work because actions don't see the PR number and
      [break](https://github.com/paritytech/polkadot-sdk/actions/runs/7827272667/job/21354764953):
      ```yml
      on:
        push:
          branches-ignore:
            - master
      ```
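      
      One way to achieve this (a hypothetical sketch; the exact condition used
      in the PR may differ) is a job-level `if` guard instead of a
      trigger-level branch filter:
      ```yml
      on:
        pull_request:
      
      jobs:
        check-prdoc:
          # Skip when the workflow runs against the master branch itself.
          if: github.ref != 'refs/heads/master'
          runs-on: ubuntu-latest
          steps:
            - run: echo "prdoc / labels checks go here"
      ```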
      
      cc https://github.com/paritytech/ci_cd/issues/940
      cc https://github.com/paritytech/polkadot-sdk/issues/3240
      c3489aaf
    • Bridge zombienet tests refactoring (#3260) · dfc8e469
      Serban Iorga authored
      Related to https://github.com/paritytech/polkadot-sdk/issues/3242
      
      Reorganizing the bridge zombienet tests in order to:
      - separate the environment spawning from the actual tests
      - offer better control over the tests and some possibility to
      orchestrate them as opposed to running everything from the zndsl file
      
      Only rewrote the asset transfer test using this new "framework". The old
      logic and old tests weren't functionally modified or deleted. The plan
      is to get feedback on this approach first and if this is agreed upon,
      migrate the other 2 tests later in separate PRs and also do other
      improvements later.
      dfc8e469
    • transaction-pool: Improve transaction status documentation and add helpers (#3215) · 4f13d5b7
      Alexandru Vasile authored
      
      
      This PR improves the transaction status documentation.
      - Added doc references describing the main states
      - Extra comment regarding the pool's ready / future queues
      - `FinalityTimeout` no longer describes a lagging finality gadget; it
      signals that the maximum number of blocks to wait for finality has been reached
      
      A few helper methods are added to indicate when:
      - an event is the final one generated by the transaction pool for a transaction
      - a final event is provided, although the transaction might become valid
      at a later time and could be re-submitted
      
      The helper methods are taken from and used by
      https://github.com/paritytech/polkadot-sdk/pull/3079, to help keep the
      two in sync.
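      
      An illustrative sketch of the kind of helpers described above (the enum
      and method names below are assumptions, not necessarily the exact API
      added by this PR):
      ```rust
      /// Simplified stand-in for the pool's transaction status.
      enum TxStatus { Ready, Broadcast, InBlock, Finalized, FinalityTimeout, Usurped, Dropped, Invalid }
      
      impl TxStatus {
          /// True if the pool will emit no further events for this transaction.
          fn is_final(&self) -> bool {
              matches!(
                  self,
                  TxStatus::Finalized
                      | TxStatus::FinalityTimeout
                      | TxStatus::Usurped
                      | TxStatus::Dropped
                      | TxStatus::Invalid
              )
          }
      
          /// True if the event is final but the transaction could be re-submitted later.
          fn is_retriable(&self) -> bool {
              matches!(self, TxStatus::Dropped | TxStatus::Invalid)
          }
      }
      ```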
      
      
      cc @paritytech/subxt-team
      
      ---------
      
      Signed-off-by: Alexandru Vasile <[email protected]>
      4f13d5b7
    • subsystem-bench: polish imports (#3262) · cbd68467
      Andrei Eres authored
      cbd68467
  2. Feb 11, 2024
    • refactor pvf security module (#3047) · 4883e144
      maksimryndin authored
      
      
      resolve https://github.com/paritytech/polkadot-sdk/issues/2321
      
      - [x] refactor the `security` module into a conditionally compiled module
      (see the sketch below)
      - [x] rename `amd64` to x86-64 for consistency with the conditional
      compilation guards and to remove the reference to a particular vendor
      - [x] run unit tests and zombienet
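      
      A minimal sketch of what "conditionally compiled" means here, assuming the
      security features are only available on Linux x86-64 (module and function
      names are illustrative, not the actual PVF code):
      ```rust
      #[cfg(all(target_os = "linux", target_arch = "x86_64"))]
      mod security {
          /// Landlock / seccomp style hardening would live here.
          pub fn enable() { /* ... */ }
      }
      
      #[cfg(all(target_os = "linux", target_arch = "x86_64"))]
      pub fn try_enable_security() {
          security::enable();
      }
      
      #[cfg(not(all(target_os = "linux", target_arch = "x86_64")))]
      pub fn try_enable_security() {
          // No-op on platforms where the security features are unavailable.
      }
      ```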
      
      ---------
      
      Co-authored-by: s0me0ne-unkn0wn <[email protected]>
      4883e144
  3. Feb 09, 2024
    • Add forklift to remaining jobs (#3236) · edd95b37
      Eugen Snitko authored
      Add [forklift
      caching](https://gitlab.parity.io/parity/infrastructure/ci_cd/forklift/forklift)
      to remaining jobs
      
      by .sh and .py scripts:
      - cargo-check-each-crate x6 (`.gitlab/check-each-crate.py`)
      - build-linux-stable (`polkadot/scripts/build-only-wasm.sh`)
      
      by before_script:
      - build-linux-substrate
      - build-subkey-linux (with `.build-subkey` job)
      - cargo-check-benches x2
      
      **To disable the feature, set the FORKLIFT_BYPASS variable to true in
      [project settings in
      gitlab](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/settings/ci_cd)**
      (forklift now handles FORKLIFT_BYPASS by itself)
      edd95b37
    • [Backport] Version bumps from 1.7.0 release (#3254) · b2c81b58
      Egor_P authored
      This PR backports version bumps from `1.7.0` release branch and moves
      related prdoc files to the appropriate folder.
      b2c81b58
  4. Feb 08, 2024
    • [FRAME] Parameters pallet (#2061) · e53ebd8c
      Oliver Tale-Yazdi authored
      
      
      Closes #169  
      
      Fork of the `orml-parameters-pallet` as introduced by
      https://github.com/open-web3-stack/open-runtime-module-library/pull/927
      (cc @xlc)
      It greatly changes how the macros work, but keeps the pallet the same.
      The downside of my code is that it only supports constant keys
      in the form of types, not value-bearing keys.
      I think this is an acceptable trade-off, given that it can be used by
      *any* pallet without any changes.
      
      The pallet allows parameters to be set dynamically and used in
      pallet configs, while also restricting updates on a per-key basis.
      The rust-docs contain a complete example.
      
      Changes:
      - Add `parameters-pallet`
      - Use in the kitchensink as demonstration
      - Add experimental attribute to define dynamic params in the runtime.
      - Adding a bunch of traits to `frame_support::traits::dynamic_params`
      that can be re-used by the ORML macros
      
      ## Example
      
      First, define the parameters in the runtime file. The syntax is very
      explicit about the codec index and errors if one is missing.
      ```rust
      #[dynamic_params(RuntimeParameters, pallet_parameters::Parameters::<Runtime>)]
      pub mod dynamic_params {
      	use super::*;
      
      	#[dynamic_pallet_params]
      	#[codec(index = 0)]
      	pub mod storage {
      		/// Configures the base deposit of storing some data.
      		#[codec(index = 0)]
      		pub static BaseDeposit: Balance = 1 * DOLLARS;
      
      		/// Configures the per-byte deposit of storing some data.
      		#[codec(index = 1)]
      		pub static ByteDeposit: Balance = 1 * CENTS;
      	}
      
      	#[dynamic_pallet_params]
      	#[codec(index = 1)]
      	pub mod contracts {
      		#[codec(index = 0)]
      		pub static DepositPerItem: Balance = deposit(1, 0);
      
      		#[codec(index = 1)]
      		pub static DepositPerByte: Balance = deposit(0, 1);
      	}
      }
      ```
      
      Then the pallet is configured with the aggregate:  
      ```rust
      impl pallet_parameters::Config for Runtime {
      	type AggregratedKeyValue = RuntimeParameters;
      	type AdminOrigin = EnsureRootWithSuccess<AccountId, ConstBool<true>>;
      	...
      }
      ```
      
      And then the parameters can be used in a pallet config:
      ```rust
      impl pallet_preimage::Config for Runtime {
      	type DepositBase = dynamic_params::storage::DepositBase;
      }
      ```
      
      A custom origin can be defined like this:
      ```rust
      pub struct DynamicParametersManagerOrigin;
      
      impl EnsureOriginWithArg<RuntimeOrigin, RuntimeParametersKey> for DynamicParametersManagerOrigin {
      	type Success = ();
      
      	fn try_origin(
      		origin: RuntimeOrigin,
      		key: &RuntimeParametersKey,
      	) -> Result<Self::Success, RuntimeOrigin> {
      		match key {
      			RuntimeParametersKey::Storage(_) => {
      				frame_system::ensure_root(origin.clone()).map_err(|_| origin)?;
      				return Ok(())
      			},
      			RuntimeParametersKey::Contract(_) => {
      				frame_system::ensure_root(origin.clone()).map_err(|_| origin)?;
      				return Ok(())
      			},
      		}
      	}
      
      	#[cfg(feature = "runtime-benchmarks")]
      	fn try_successful_origin(_key: &RuntimeParametersKey) -> Result<RuntimeOrigin, ()> {
      		Ok(RuntimeOrigin::Root)
      	}
      }
      ```
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Nikhil Gupta <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
      Co-authored-by: command-bot <>
      e53ebd8c
    • Fixes `TotalValueLocked` out of sync in nomination pools (#3052) · aac07af0
      Gonçalo Pestana authored
      The `TotalValueLocked` storage value in the nomination pools pallet may get
      out of sync if the staking pallet does implicit withdrawal of unlocking
      chunks belonging to a bonded pool stash. This fix is based on a new
      method in the `OnStakingUpdate` trait, `on_withdraw`, which allows the
      nomination pools pallet to adjust `TotalValueLocked` every time
      there is an implicit or explicit withdrawal from a bonded pool's stash.
      
      This PR also adds a migration that checks and updates the on-chain TVL
      if it got out of sync due to the bug this PR fixes.
      
      **Changes to `trait OnStakingUpdate`**
      
      In order for staking to notify the nomination pools pallet that chunks
      were withdrawn, we add a new method, `on_withdraw`, to the
      `OnStakingUpdate` trait. The nomination pools pallet filters the
      withdrawals that are related to bonded pool accounts and updates
      `TotalValueLocked` accordingly.
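      
      A minimal sketch of that idea, assuming an `on_withdraw(stash, amount)`
      style hook (the types and field names are simplified stand-ins for the
      nomination-pools pallet internals):
      ```rust
      use std::collections::HashSet;
      
      type AccountId = u64;
      type Balance = u128;
      
      struct Pools {
          bonded_pool_accounts: HashSet<AccountId>,
          total_value_locked: Balance,
      }
      
      impl Pools {
          /// Called by staking whenever funds are withdrawn, implicitly or explicitly.
          fn on_withdraw(&mut self, stash: &AccountId, amount: Balance) {
              // Only withdrawals from bonded pool stashes affect the pools' TVL.
              if self.bonded_pool_accounts.contains(stash) {
                  self.total_value_locked = self.total_value_locked.saturating_sub(amount);
              }
          }
      }
      ```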
      
      **Others**
      - Adds try-state checks to the EPM/staking e2e tests
      - Adds tests for auto withdrawing in the context of nomination pools
      
      **To-do**
      - [x] check if we need a migration to fix the current `TotalValueLocked`
      (run try-runtime)
      - [x] migrations to fix the current on-chain TVL value 
      
        **Kusama**:
      ```
      TotalValueLocked: 99.4559 kKSM
      TotalValueLocked (calculated) 99.4559 kKSM
      ```
      
      
      **Westend**:
      ```
      TotalValueLocked: 18.4060 kWND
      TotalValueLocked (calculated) 18.4050 kWND
      ```
      **Polkadot**: TVL not released yet.
      
      Closes https://github.com/paritytech/polkadot-sdk/issues/3055
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Ross Bulat <[email protected]>
      Co-authored-by: Dónal Murray <[email protected]>
      aac07af0
    • `bench pallet`: only require `Hash` instead of `Block` (#3244) · c36c51ca
      Oliver Tale-Yazdi authored
      
      
      Preparation for https://github.com/paritytech/polkadot-sdk/issues/2664
      
      Changes:
      - Only require `Hash` instead of `Block` for the benchmarking
      - Refactor DB types to do the same
      
      ## Integration
      
      This breaking change can easily be integrated into your node via:  
      ```patch
      - cmd.run::<Block, ()>(config)
      + cmd.run::<HashingFor<Block>, ()>(config)
      ```
      
      Status: waiting for CI checks
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: cheme <[email protected]>
      c36c51ca
    • Radha's avatar
      a2e6256c
    • Make BEEFY client keystore generic over BEEFY `AuthorityId` type (#2258) · 0a94124d
      drskalman authored
      
      
      This is a significant step toward making the BEEFY client able to handle
      both ECDSA and (ECDSA, BLS) type signatures. The idea is that having the
      BEEFY client generic over crypto types makes migration to new types
      smoother.
      
      This makes the BEEFY keystore generic over `AuthorityId` and extends its
      tests to cover the case where the `AuthorityId` is of type (ECDSA,
      BLS12-377).
      
      ---------
      
      Co-authored-by: Davide Galassi <[email protected]>
      Co-authored-by: Robert Hambrock <[email protected]>
      0a94124d
    • Contracts update doc.rs metadata (#3241) · bc5a758c
      PG Herveou authored
      
      
      Adding Rust metadata for docs.rs;
      see https://docs.rs/about/metadata
      
      ---------
      
      Co-authored-by: Alexander Theißen <[email protected]>
      bc5a758c
    • contracts: Remove no longer enforced limits from the `Schedule` (#3184) · d54412ce
      Alexander Theißen authored
      When switching from the instrumented gas metering to the wasmi gas
      metering we also removed all imposed limits regarding Wasm module
      internals. All those things do not interact with the host and have to be
      handled by wasmi. For example, Wasmi charges additional gas for
      parameters to each function because they incur some overhead.
      
      Back then we took the opportunity to remove the dependency on the
      deprecated `parity-wasm` which was used to enforce those limits.
      
      This PR merely removes them from the `Schedule`; they haven't been
      enforced for a while.
      d54412ce
    • contracts: Remove unused benchmarks (#3185) · 7fa05518
      Alexander Theißen authored
      Those were used for some ad-hoc comparison of solang vs ink! with regard
      to ERC20 transfers. They have not been used for a while.
      
      Benchmarking is done here now:
      [smart-bench](https://github.com/paritytech/smart-bench): Weight-based
      benchmark to test how many transactions actually fit into a block with
      the current Weights
      [schlau](https://github.com/ascjones/schlau): Time-based benchmarks to
      compare performance
      7fa05518
    • contracts: Don't fail fast if the `Weight` limit of a cross contract call is too big (#3243) · 28463a12
      Alexander Theißen authored
      
      
      When doing a cross contract call you can supply an optional Weight limit
      for that call. If one doesn't specify the limit (setting it to 0) the
      sub call will have all the remaining gas available. If one does specify
      the limit we subtract that amount eagerly from the Weight meter and fail
      fast if not enough `Weight` is available.
      
      This is quite annoying because setting a fixed limit will set the
      `gas_required` in the gas estimation according to the specified limit,
      even if in that dry-run the actual call didn't consume that whole
      amount. It effectively discards the more precise measurement it should
      have from the dry-run.
      
      This PR changes the behaviour so that the supplied limit is an actual
      limit: We do the cross contract call even if the limit is higher than
      the remaining `Weight`. We then fail and roll back in the sub call in
      case there is not enough weight.
      
      This makes the weight estimation in the dry-run no longer dependent on
      the weight limit supplied when doing a cross contract call.
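      
      A hypothetical sketch of the change in limit handling (plain integers
      stand in for `Weight`; this is not the actual pallet-contracts code):
      ```rust
      /// Old behaviour: error out eagerly when `specified > remaining`.
      /// New behaviour: always attempt the sub-call; it only fails and rolls
      /// back if it actually runs out of weight while executing.
      fn sub_call_limit(specified: u64, remaining: u64) -> u64 {
          if specified == 0 {
              // No limit given: the sub-call may use everything that is left.
              remaining
          } else {
              // A limit larger than what remains is no longer an eager error.
              specified.min(remaining)
          }
      }
      ```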
      
      ---------
      
      Co-authored-by: PG Herveou <[email protected]>
      28463a12
    • Try State Hook for Ranked Collective (#3007) · 9cd02a07
      dharjeezy authored
      
      
      Part of: paritytech/polkadot-sdk#239
      
      Polkadot address: 12GyGD3QhT4i2JJpNzvMf96sxxBLWymz4RdGCxRH5Rj5agKW
      
      ---------
      
      Co-authored-by: Liam Aharon <[email protected]>
      9cd02a07
    • subsystem-bench: run cli benchmarks only using config files (#3239) · 07f85929
      Andrei Eres authored
      This PR removes the configuration of subsystem benchmarks via CLI
      arguments. After this, we keep configurations only in YAML files.
      This removes unnecessary code duplication.
      07f85929
    • Add try_state and integrity_test to XCM simulator fuzzer (#3222) · 84d89e37
      Louis Merlin authored
      This adds `try_state()` and `integrity_test()` to the four runtimes of
      the XCM-simulator fuzzer.
      
      With this, we are able to stress-test [message-queue's
      try_state](https://github.com/paritytech/polkadot-sdk/blob/7df1ae3b/substrate/frame/message-queue/src/lib.rs#L1245-L1347).
      
      This also adds the `Transact` block-listing from #2424 to avoid
      false-positives.
      
      Thank you @ggwpez for the help with the runtime configurations.
      84d89e37
    • [pallet_broker] Remove leases that have already expired in rotate_sale (#3213) · 2ea6bcf1
      Dónal Murray authored
      Leases can be force set, but since `Leases` is a `StorageValue`, if a
      lease misses its sale rotation in which it should expire, it can never
      be cleared.
      
      This can happen if a lease is added with an `until` timeslice that lies
      in a region whose sale has already started or has passed, even if the
      timeslice itself hasn't passed.
      
      This solves that issue in a minimal way, with all expired leases being
      cleaned up in each sale rotation, not just the ones that are expiring in
      the coming region.
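      
      A minimal sketch of the cleanup idea, assuming a lease is just a
      `(task, until)` pair and `now` is the current timeslice (this is not the
      actual pallet-broker code):
      ```rust
      type Task = u32;
      type Timeslice = u32;
      
      /// Drop every lease whose `until` timeslice has already passed, not only
      /// the ones expiring in the coming region.
      fn drop_expired_leases(leases: &mut Vec<(Task, Timeslice)>, now: Timeslice) {
          leases.retain(|&(_, until)| until > now);
      }
      ```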
      
      TODO:
      - [x] Write test
      2ea6bcf1
    • [ci] Remove path from check-workspace GHA trigger (#3255) · 2556e33f
      Alexander Samusev authored
      In order to make the action `Required`, it should always run.
      
      cc @ggwpez
      2556e33f
  5. Feb 06, 2024
    • subsystem-bench: Prepare CI output (#3158) · 9e6298e7
      Andrei Eres authored
      
      
      1. Benchmark results are collected in a single struct.
      2. The output of the results is prettified.
      3. The result struct is used to save the output as YAML and store it in
      artifacts in a CI job.
      
      ```
      $ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
      $ cat output.txt
      
      polkadot/node/subsystem-bench/examples/availability_read.yaml #1
      
      Network usage, KiB                     total   per block
      Received from peers               510796.000  170265.333
      Sent to peers                        221.000      73.667
      
      CPU usage, s                           total   per block
      availability-recovery                 38.671      12.890
      Test environment                       0.255       0.085
      
      
      polkadot/node/subsystem-bench/examples/availability_read.yaml #2
      
      Network usage, KiB                     total   per block
      Received from peers               413633.000  137877.667
      Sent to peers                        353.000     117.667
      
      CPU usage, s                           total   per block
      availability-recovery                 52.630      17.543
      Test environment                       0.271       0.090
      
      
      polkadot/node/subsystem-bench/examples/availability_read.yaml #3
      
      Network usage, KiB                     total   per block
      Received from peers               424379.000  141459.667
      Sent to peers                        703.000     234.333
      
      CPU usage, s                           total   per block
      availability-recovery                 51.128      17.043
      Test environment                       0.502       0.167
      
      ```
      
      ```
      $ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
      $ cat output.txt
      - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1'
        network:
        - resource: Received from peers
          total: 509011.0
          per_block: 169670.33333333334
        - resource: Sent to peers
          total: 220.0
          per_block: 73.33333333333333
        cpu:
        - resource: availability-recovery
          total: 31.845848445
          per_block: 10.615282815
        - resource: Test environment
          total: 0.23582828799999941
          per_block: 0.07860942933333313
      
      - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2'
        network:
        - resource: Received from peers
          total: 411738.0
          per_block: 137246.0
        - resource: Sent to peers
          total: 351.0
          per_block: 117.0
        cpu:
        - resource: availability-recovery
          total: 18.93596025099999
          per_block: 6.31198675033333
        - resource: Test environment
          total: 0.2541994199999979
          per_block: 0.0847331399999993
      
      - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3'
        network:
        - resource: Received from peers
          total: 424548.0
          per_block: 141516.0
        - resource: Sent to peers
          total: 703.0
          per_block: 234.33333333333334
        cpu:
        - resource: availability-recovery
          total: 16.54178526900001
          per_block: 5.513928423000003
        - resource: Test environment
          total: 0.43960946299999537
          per_block: 0.14653648766666513
      ```
      
      ---------
      
      Co-authored-by: Andrei Sandu <[email protected]>
      9e6298e7
    • [pallet_xcm] Forgotten migration to XCMv4 + added `try-state` to the `pallet_xcm` (#3228) · 8c1c99f0
      Branislav Kontur authored
      Relates to: https://github.com/paritytech/polkadot-sdk/issues/3214
      
      ## TODO
      
      - [ ] backport to the `1.7.0` release
      8c1c99f0
    • Build more runtimes targeting PolkaVM (#3209) · 402b64ca
      Koute authored
      This PR improves compatibility with RISC-V and PolkaVM, allowing more
      runtimes to successfully compile.
      
      In particular, it makes the following changes:
      
      - The `sp-mmr-primitives` and `sp-consensus-beefy` crates
      unconditionally required an `std`-only dependency; now they only require
      those dependencies when the `std` feature is actually enabled. (Our
      RISC-V target is, unlike WASM, a true `no_std` target where you can't
      accidentally use stuff from `std` anymore.)
      - One of our dependencies (the `bitvec` crate) uses a crate called
      `radium` which doesn't compile under RISC-V due to incomplete
      autodetection logic in its `build.rs` file. The good news is that this
      is already fixed in the newest upstream version of `radium`, and the
      newest version of `bitvec` uses it. The bad news is that the newest
      version of `bitvec` is not currently released on crates.io, so we can't
      use it. I've [created an
      issue](https://github.com/ferrilab/ferrilab/issues/5) asking for a new
      release, but in the meantime I forked the currently used `radium` 0.7,
      [fixed the faulty
      logic](https://github.com/paritytech/radium-0.7-fork/commit/ed66c8a294b138c67f93499644051d97d4c7fbda)
      and used cargo's patching capabilities to use it for the RISC-V runtime
      builds. This might be a little hacky, but it is the least intrusive way
      to fix the problem, doesn't affect WASM builds at all, and we can
      trivially remove it once a new `bitvec` is released.
      - The new runtimes are added to the CI to make sure their compilation
      doesn't break.
      402b64ca
    • Introduce submit_finality_proof_ex call to bridges GRANDPA pallet (#3225) · a4622071
      Svyatoslav Nikolsky authored
      backport of
      https://github.com/paritytech/parity-bridges-common/pull/2821 (see
      detailed description there)
      a4622071
    • sp-std -> core (#3199) · bc2e5e1f
      Squirrel authored
      First in a series of PRs that reduces our use of sp-std with a view to
      deprecating it.
      
      This is just looking at /substrate and moving some of the references
      from `sp-std` to `core`.
      These particular changes should be uncontroversial.
      
      Where macros are used, `::core` should be used to remove any ambiguity.
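      
      A small illustration of why the absolute `::core` path is preferred inside
      macros (the example macro is made up for this purpose):
      ```rust
      // The absolute path cannot be shadowed by whatever `core` or `Default`
      // happens to be in scope at the macro call site.
      macro_rules! default_of {
          ($t:ty) => {
              <$t as ::core::default::Default>::default()
          };
      }
      
      fn main() {
          let x: u32 = default_of!(u32);
          assert_eq!(x, 0);
      }
      ```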
      
      part of https://github.com/paritytech/polkadot-sdk/issues/2101
      bc2e5e1f
    • Ranked collective `Add`+`Remove` origins (#3212) · c552fb54
      Oliver Tale-Yazdi authored
      
      
      Supersedes https://github.com/paritytech/polkadot-sdk/pull/1245
      
      This PR is a migration of the
      https://github.com/paritytech/substrate/pull/14577.
      
      The PR added associated types (`AddOrigin` & `RemoveOrigin`) to
      `Config`. It allows you to decouple types and areas of responsibility,
      since at the moment the same types are responsible for adding and
      promoting (removing and demoting). This will improve the flexibility of
      the pallet configuration.
      
      ```
      /// The origin required to add a member.
      type AddOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = ()>;
      
      /// The origin required to remove a member. The success value indicates the
      /// maximum rank *from which* the removal may be.
      type RemoveOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = Rank>;
      ```
      To achieve backward compatibility, users of the pallet can use
      the old type via the new morph:
      
      ```
      type AddOrigin = MapSuccess<Self::PromoteOrigin, Ignore>;
      type RemoveOrigin = Self::DemoteOrigin;
      ```
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: PraetorP <[email protected]>
      Co-authored-by: Pavel Orlov <[email protected]>
      c552fb54
    • prospective-parachains: respond with multiple backable candidates (#3160) · 7df1ae3b
      Alin Dima authored
      Fixes https://github.com/paritytech/polkadot-sdk/issues/3129
      7df1ae3b
  6. Feb 05, 2024
    • Bump indicatif from 0.17.6 to 0.17.7 (#3200) · 53f615de
      dependabot[bot] authored
      
      
      Bumps [indicatif](https://github.com/console-rs/indicatif) from 0.17.6
      to 0.17.7.
      Commits:
      - 0c037ed Bump version to 0.17.7 (#589)
      - 4461012 Fix attempt to subtract with overflow (#582) (#586)
      - 257d3ec Bump actions/checkout from 3 to 4
      - 40b40d2 fix unnecessary vec! lint instances
      - a5a8524 Tick ProgressTrackers before drawing
      - 75fca29 Add scheduled CI runs every week
      - c0ea468 Upgrade to 2021 edition
      - 73a67f8 Bump MSRV to 1.63 for tokio 1.30
      - de09017 Reorder Cargo metadata fields
      - cee6fd4 Fix a potential overflow with a saturating add.
      - Additional commits viewable in the compare view:
        https://github.com/console-rs/indicatif/compare/0.17.6...0.17.7
      
      
      [![Dependabot compatibility
      score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=indicatif&package-manager=cargo&previous-version=0.17.6&new-version=0.17.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
      
      Dependabot will resolve any conflicts with this PR as long as you don't
      alter it yourself. You can also trigger a rebase manually by commenting
      `@dependabot rebase`.
      
      
      
      Signed-off-by: dependabot[bot] <[email protected]>
      Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
      53f615de
    • Introduce approval-voting/distribution benchmark (#2621) · f9f88688
      Alexandru Gheorghe authored
      
      
      ## Summary
      Built on top of the tooling and ideas introduced in
      https://github.com/paritytech/polkadot-sdk/pull/2528, this PR introduces
      a synthetic benchmark for measuring and assessing the performance
      characteristics of the approval-voting and approval-distribution
      subsystems.
      
      Currently this allows us to simulate the behaviour of these subsystems
      based on the following dimensions:
      ```
      TestConfiguration:
      # Test 1
      - objective: !ApprovalsTest
          last_considered_tranche: 89
          min_coalesce: 1
          max_coalesce: 6
          enable_assignments_v2: true
          send_till_tranche: 60
          stop_when_approved: false
          coalesce_tranche_diff: 12
          workdir_prefix: "/tmp"
          num_no_shows_per_candidate: 0
          approval_distribution_expected_tof: 6.0
          approval_distribution_cpu_ms: 3.0
          approval_voting_cpu_ms: 4.30
        n_validators: 500
        n_cores: 100
        n_included_candidates: 100
        min_pov_size: 1120
        max_pov_size: 5120
        peer_bandwidth: 524288000000
        bandwidth: 524288000000
        latency:
          min_latency:
            secs: 0
            nanos: 1000000
          max_latency:
            secs: 0
            nanos: 100000000
        error: 0
        num_blocks: 10
      ```
      
      ## The approach
      1. We build a real overseer with the real implementations for
      approval-voting and approval-distribution subsystems.
      2. For a given network size, for each validator we pre-compute all
      potential assignments and approvals it would send; because this is a
      computation-heavy operation, the result is cached in a file on disk and
      re-used if the generation parameters don't change.
      3. The messages are then sent according to the configured parameters
      and are split into 3 main benchmarking scenarios.
      
      ## Benchmarking scenarios
      
      ### Best case scenario *approvals_throughput_best_case.yaml*
      It sends to approval-distribution only the minimum required tranches
      to gather the needed_approvals, so that a candidate is approved.
      
      ### Behaviour in the presence of no-shows *approvals_no_shows.yaml*
      It sends the tranche needed to approve a candidate when we have a
      maximum of *num_no_shows_per_candidate* tranches with no-shows for each
      candidate.
      
      ### Maximum throughput *approvals_throughput.yaml*
      It sends all the tranches for each block and measures the CPU usage and
      network bandwidth needed by the approval-voting and
      approval-distribution subsystems.
      
      ## How to run it
      ```
      cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/approvals_throughput.yaml
      ```
      
      ## Evaluating performance
      ### Use the real subsystems metrics
      If you follow the steps in
      https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-grafana
      for installing locally prometheus and grafana, all real metrics for the
      `approval-distribution`, `approval-voting` and overseer are available.
      E.g:
      <img width="2149" alt="Screenshot 2023-12-05 at 11 07 46"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/cb8ae2dd-178b-4922-bfa4-dc37e572ed38">
      
      <img width="2551" alt="Screenshot 2023-12-05 at 11 09 42"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/8b4542ba-88b9-46f9-9b70-cc345366081b">
      
      <img width="2154" alt="Screenshot 2023-12-05 at 11 10 15"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/b8874d8d-632e-443a-9840-14ad8e90c54f">
      
      <img width="2535" alt="Screenshot 2023-12-05 at 11 10 52"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/779a439f-fd18-4985-bb80-85d5afad78e2">
      
      ### Profile with pyroscope
      1. Setup pyroscope following the steps in
      https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-pyroscope,
      then run any of the benchmark scenario with `--profile` as the
      arguments.
      2. Open the pyroscope dashboard in grafana, e.g:
      <img width="2544" alt="Screenshot 2024-01-09 at 17 09 58"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/58f50c99-a910-4d20-951a-8b16639303d9">
      
      
      
      ### Useful logs
      1. Network bandwidth requirements:
      ```
      Payload bytes received from peers: 503993 KiB total, 50399 KiB/block
      Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block
      ```
      
      2. Cpu usage by the approval-distribution/approval-voting subsystems.
      ```
      approval-distribution CPU usage 84.061s
      approval-distribution CPU usage per block 8.406s
      approval-voting CPU usage 96.532s
      approval-voting CPU usage per block 9.653s
      ```
      
      3. Time passed until a given block is approved
      ```
       Chain selection approved  after 3500 ms hash=0x0101010101010101010101010101010101010101010101010101010101010101
      Chain selection approved  after 4500 ms hash=0x0202020202020202020202020202020202020202020202020202020202020202
      ```
      
      ### Using benchmark to quantify improvements from
      https://github.com/paritytech/polkadot-sdk/pull/1178 +
      https://github.com/paritytech/polkadot-sdk/pull/1191
      
      Using a versi-node we compare the scenario where all new optimisations
      are disabled with a scenario where tranche0 assignments are sent in a
      single message and a conservative simulation where the coalescing of
      approvals gives us just a 50% reduction in the number of messages we send.
      
      Overall, what we see is a speedup of around 30-40% in the time it takes
      to process the necessary messages and a 30-40% reduction in the
      necessary bandwidth.
      
      #### Best case scenario comparison (minimum required tranches sent).
      Unoptimised
      ```
          Number of blocks: 10
          Payload bytes received from peers: 53289 KiB total, 5328 KiB/block
          Payload bytes sent to peers: 52489 KiB total, 5248 KiB/block
          approval-distribution CPU usage 6.732s
          approval-distribution CPU usage per block 0.673s
          approval-voting CPU usage 9.523s
          approval-voting CPU usage per block 0.952s
      ```
      
      vs Optimisation enabled
      ```
         Number of blocks: 10
         Payload bytes received from peers: 32141 KiB total, 3214 KiB/block
         Payload bytes sent to peers: 37314 KiB total, 3731 KiB/block
         approval-distribution CPU usage 4.658s
         approval-distribution CPU usage per block 0.466s
         approval-voting CPU usage 6.236s
         approval-voting CPU usage per block 0.624s
      ```
      
      #### Worst case: all tranches sent; very unlikely, happens only when sharding breaks.
      
      Unoptimised
      ```
         Number of blocks: 10
         Payload bytes received from peers: 746393 KiB total, 74639 KiB/block
         Payload bytes sent to peers: 729151 KiB total, 72915 KiB/block
         approval-distribution CPU usage 118.681s
         approval-distribution CPU usage per block 11.868s
         approval-voting CPU usage 124.118s
         approval-voting CPU usage per block 12.412s
      ```
      
      vs optimised
      ```
          Number of blocks: 10
          Payload bytes received from peers: 503993 KiB total, 50399 KiB/block
          Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block
          approval-distribution CPU usage 84.061s
          approval-distribution CPU usage per block 8.406s
          approval-voting CPU usage 96.532s
          approval-voting CPU usage per block 9.653s
      ```
      
      
      ## TODOs
      - [x] Polish implementation.
      - [x] Use what we have so far to evaluate
      https://github.com/paritytech/polkadot-sdk/pull/1191 before merging.
      - [x] List of features and additional dimensions we want to use for
      benchmarking.
      - [x] Run benchmark on hardware similar with versi and kusama nodes.
      - [ ] Add benchmark to be run in CI for catching regression in
      performance.
      - [ ] Rebase on latest changes for network emulation.
      
      ---------
      
      Signed-off-by: Andrei Sandu <[email protected]>
      Signed-off-by: Alexandru Gheorghe <[email protected]>
      Co-authored-by: Andrei Sandu <[email protected]>
      Co-authored-by: Andrei Sandu <[email protected]>
      f9f88688
    • Bump paritytech/review-bot from 2.3.0 to 2.4.0 (#3119) · 90849b66
      dependabot[bot] authored
      
      
      Bumps [paritytech/review-bot](https://github.com/paritytech/review-bot)
      from 2.3.0 to 2.4.0.
      Release notes (sourced from paritytech/review-bot's releases):
      
      v2.4.0 - What's Changed:
      - Updated node and dependencies by @Bullrich in paritytech/review-bot#111
      - Refactor of failed reviews objects by @Bullrich in paritytech/review-bot#112
      - Added required score grouping to fellows reviews by @Bullrich in paritytech/review-bot#113
      Full Changelog: https://github.com/paritytech/review-bot/compare/v2.3.1...v2.4.0
      
      v2.3.1 - What's Changed:
      - Fellows: Added search of super identity by @Bullrich in paritytech/review-bot#108
      Full Changelog: https://github.com/paritytech/review-bot/compare/v2.3.0...v2.3.1
      
      Commits:
      - 2800183 Added required score grouping to fellows reviews (#113)
      - 5371807 Refactor of failed reviews objects (#112)
      - 5f58f48 Updated node and dependencies (#111)
      - 4ea0044 Fellows: Added search of super identity (#108)
      - See full diff in the compare view:
        https://github.com/paritytech/review-bot/compare/v2.3.0...v2.4.0
      
      
      [![Dependabot compatibility
      score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=paritytech/review-bot&package-manager=github_actions&previous-version=2.3.0&new-version=2.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
      
      Dependabot will resolve any conflicts with this PR as long as you don't
      alter it yourself. You can also trigger a rebase manually by commenting
      `@dependabot rebase`.
      
      
      
      Signed-off-by: dependabot[bot] <[email protected]>
      Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
      90849b66
  7. Feb 03, 2024
    • Expose internal functions used by `spawn_tasks` (#3166) · 12e5e19c
      Nazar Mokrynskyi authored
      
      
      This allows building a custom version of `spawn_tasks` with less
      copy-paste required.
      
      Resolves https://github.com/paritytech/polkadot-sdk/issues/2110
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
      12e5e19c
    • Contracts: Stabilize `caller_is_root` API (#3154) · 966a8864
      Cyrill Leutwiler authored
      
      
      Can this API be marked stable? Implemented in [solang
      here](https://github.com/hyperledger/solang/pull/1620)
      
      ---------
      
      Signed-off-by: Cyrill Leutwiler <[email protected]>
      966a8864
    • Initial support for building RISC-V runtimes targeting PolkaVM (#3179) · e349fc9e
      Koute authored
      This PR adds initial support for building RISC-V runtimes targeting
      PolkaVM.
      
      - Setting the `SUBSTRATE_RUNTIME_TARGET=riscv` environment variable will
      now build a RISC-V runtime instead of a WASM runtime.
      - This only adds support for *building* runtimes; running them will need
      a PolkaVM-based executor, which I will add in a future PR.
      - Only building the minimal runtime is supported (building the Polkadot
      runtime doesn't work *yet* due to one of the dependencies).
      - The builder now sets a `substrate_runtime` cfg flag when building the
      runtimes, with the idea being that instead of doing `#[cfg(not(feature =
      "std"))]` or `#[cfg(target_arch = "wasm32")]` to detect that we're
      building a runtime you'll do `#[cfg(substrate_runtime)]`; see the small
      sketch after this list. (Switching the whole codebase to use this will be
      done in a future PR; I deliberately didn't do this here to keep this PR
      minimal and reviewable.)
      - Further renaming of things (e.g. types, environment variables and proc
      macro attributes having "wasm" in their name) to be target-agnostic will
      also be done in a future refactoring PR (while keeping backwards
      compatibility where it makes sense; I don't intend to break anyone's
      workflow or create unnecessary churn).
      - This PR also fixes two bugs in the `wasm-builder` crate:
      * The `RUSTC` environment variable is now removed when invoking the
      compiler. This prevents the toolchain version from being overridden when
      called from a `build.rs` script.
      * When parsing the `rustup toolchain list` output the `(default)` is now
      properly stripped and not treated as part of the version.
      - I've also added a minimal CI job that makes sure this doesn't break in
      the future. (cc @paritytech/ci)
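      
      A tiny sketch of the new cfg flag mentioned above (illustrative only; the
      function names are made up):
      ```rust
      #[cfg(substrate_runtime)]
      fn runtime_only() {
          // Compiled only when building a runtime, regardless of the target
          // architecture (wasm32, riscv, ...).
      }
      
      #[cfg(not(substrate_runtime))]
      fn native_only() {
          // Compiled for the native (std) build.
      }
      ```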
      
      cc @athei
      
      
      
      ------
      
      Also, just a fun little tidbit: quickly comparing the size of the built
      runtimes it seems that the PolkaVM runtime is slightly smaller than the
      WASM one. (`production` build, with the `names` section substracted from
      the WASM's size to keep things fair, since for the PolkaVM runtime we're
      currently stripping out everything)
      
      - `.wasm`: 625505 bytes
      - `.wasm` (after wasm-opt -O3): 563205 bytes
      - `.wasm` (after wasm-opt -Os): 562987 bytes
      - `.wasm` (after wasm-opt -Oz): 536852 bytes
      - `.polkavm`: ~~580338 bytes~~ 550476 bytes (after enabling extra target
      features; I'll add those in another PR once we have an executor working)
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
      e349fc9e
  8. Feb 02, 2024
    • Bump wasmi from 0.31.0 to 0.31.2 (#3164) · 41db45a2
      dependabot[bot] authored
      
      
      Bumps [wasmi](https://github.com/paritytech/wasmi) from 0.31.0 to
      0.31.2.
Release notes (sourced from [wasmi's
releases](https://github.com/paritytech/wasmi/releases)):

**v0.31.1 - 2023-12-01**

Fixes:
- Fixed a bug in the `wasmi` engine executor that caused an out-of-bounds
buffer write when calling or resuming a Wasm function with a high number
of parameters from the host side.
Changelog (sourced from [wasmi's
changelog](https://github.com/paritytech/wasmi/blob/master/CHANGELOG.md)):

All notable changes to this project will be documented in this file.

The format is loosely based on [Keep a
Changelog](https://keepachangelog.com/en/1.0.0/), and this project
adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
Additionally, there is an `Internal` section for changes that are of
interest to developers.

Dates in this file are formatted as `YYYY-MM-DD`.

**[`0.32.0-beta.5`] - 2024-01-15**

Note:
- This is the beta of the upcoming `v0.32.0` release. This version is not
production ready yet and might contain serious bugs. Please use it only
for experimentation or at your own risk.
- Performance tests indicated that the performance of the new
register-machine-bytecode-based Wasmi engine is very sensitive to
hardware and OS specifics, which may lead to very different performance
characteristics.
  - We are working on fixing this before the stable release.
  - Measurements concluded that execution performance can equal or
sometimes even surpass Wasm3 execution performance.

Added:
- Added a new execution engine based on register-machine bytecode.
([paritytech/wasmi#729](https://redirect.github.com/paritytech/wasmi/pull/729))
  - The register-machine Wasmi `Engine` executes roughly 80-100% faster
and compiles roughly 30% slower according to benchmarks conducted so far.
- Added the `Module::new_unchecked` API.
([paritytech/wasmi#829](https://redirect.github.com/paritytech/wasmi/pull/829))
  - This allows compiling a Wasm module without Wasm validation, which
can be useful when users know that their inputs are valid Wasm binaries.
  - This improves Wasm compilation performance for faster startup times
by roughly 10-20%.
- Added Wasm compilation modes.
([paritytech/wasmi#844](https://redirect.github.com/paritytech/wasmi/pull/844))
  - When using `Module::new`, Wasmi eagerly compiles Wasm bytecode into
Wasmi bytecode, which is optimized for efficient execution. However, this
compilation can become very costly, especially for large Wasm binaries.
  - The solution to this problem is to introduce new compilation modes,
namely:
    - `CompilationMode::Eager` (default): Eager compilation, i.e. what
Wasmi did so far.
    - `CompilationMode::LazyTranslation`: Eager Wasm validation and lazy
Wasm translation.
    - `CompilationMode::Lazy`: Lazy Wasm validation and translation.
  - Benchmarks concluded that:
    - `CompilationMode::LazyTranslation` usually improves startup
performance by a factor of 2 to 3.
    - `CompilationMode::Lazy` usually improves startup performance by a
factor of up to 27.
  - Note that `CompilationMode::Lazy` can lead to partially validated
Wasm modules, which can introduce non-determinism when using different
Wasm implementations. Therefore users should know what they are doing
when using `CompilationMode::Lazy` if this is a concern.
  - Enable lazy Wasm compilation with:

```rust
let mut config = wasmi::Config::default();
```

... (truncated)
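
As a rough illustration of how the lazy compilation mode from the
truncated snippet above would be enabled: the sketch below assumes the
wasmi 0.32-beta API names mentioned in this changelog
(`Config::compilation_mode`, `CompilationMode::Lazy`) and is not part of
the original release notes.

```rust
use wasmi::{CompilationMode, Config, Engine, Module};

// Sketch only (assumed 0.32-beta API): opt into lazy validation and
// translation so large modules start up faster, per the notes above.
fn build_module(wasm: &[u8]) -> Result<Module, wasmi::Error> {
    let mut config = Config::default();
    config.compilation_mode(CompilationMode::Lazy);

    let engine = Engine::new(&config);
    // `Module::new` still validates (lazily here); `Module::new_unchecked`
    // would skip validation entirely for known-good binaries.
    Module::new(&engine, wasm)
}
```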
Commits:
- [`0218dfc`](https://github.com/paritytech/wasmi/commit/0218dfc74b4c4a83261d46d90ac83fb513fd6b3f) Fix `InstanceCache` bug ([#904](https://redirect.github.com/paritytech/wasmi/issues/904))
- [`3fd0cc2`](https://github.com/paritytech/wasmi/commit/3fd0cc2b2d7b7a55142e6a6cffffbe4212ed00ae) Bump `wasmi_arena` version ([#903](https://redirect.github.com/paritytech/wasmi/issues/903))
- [`86c8740`](https://github.com/paritytech/wasmi/commit/86c874029eba5067f4ecd01bc3c4f6dacab5a16e) Fix `Sync` impl bug in `wasmi_arena` crate ([#902](https://redirect.github.com/paritytech/wasmi/issues/902))
- [`27def28`](https://github.com/paritytech/wasmi/commit/27def282b06613e770d0ab96de88b9909973a12b) Bump actions/cache from 3 to 4 ([#900](https://redirect.github.com/paritytech/wasmi/issues/900))
- [`59f9acc`](https://github.com/paritytech/wasmi/commit/59f9acc4776c09a35c6d563609de6818e9b65084) Fix typos ([#899](https://redirect.github.com/paritytech/wasmi/issues/899))
- [`4c06acd`](https://github.com/paritytech/wasmi/commit/4c06acd816ccde6f45f9cc16aac4e18d36066054) Update and improve Wasmi's readme ([#898](https://redirect.github.com/paritytech/wasmi/issues/898))
- [`2354a20`](https://github.com/paritytech/wasmi/commit/2354a20ecc5e4209af2ba7458a8c383789ad8b4f) Prepare release for Wasmi `v0.32.0-beta.5` ([#893](https://redirect.github.com/paritytech/wasmi/issues/893))
- [`a4dc251`](https://github.com/paritytech/wasmi/commit/a4dc251bf066c362a2fc6acf00da924659894c6d) Fix heap buffer overflow due to Wasmi codegen bug ([#892](https://redirect.github.com/paritytech/wasmi/issues/892))
- [`e60da49`](https://github.com/paritytech/wasmi/commit/e60da4979009370cb1149b29dbb612886854efa9) Add CI test job using LLVM's Address Sanitizer ([#891](https://redirect.github.com/paritytech/wasmi/issues/891))
- [`28c770a`](https://github.com/paritytech/wasmi/commit/28c770ac9623d78ce10c67d9bec013e0d3a43bcb) Prepare release for Wasmi `v0.32.0-beta.4` ([#889](https://redirect.github.com/paritytech/wasmi/issues/889))
- Additional commits viewable in the [compare view](https://github.com/paritytech/wasmi/compare/v0.31.0...v0.31.2)
      
      
      [![Dependabot compatibility
      score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=wasmi&package-manager=cargo&previous-version=0.31.0&new-version=0.31.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
      
      Dependabot will resolve any conflicts with this PR as long as you don't
      alter it yourself. You can also trigger a rebase manually by commenting
      `@dependabot rebase`.
      
      
      ---
      
      
      Signed-off-by: default avatardependabot[bot] <[email protected]>
      Co-authored-by: default avatardependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
      41db45a2