Mar 30, 2023
    • 5d5cc9a0
      Bastian Köcher authored
    • Fix nomination pools doc render (#13748) · 95da8d7a
      Kian Paimani authored
      Co-authored-by: parity-processbot <>
    • BEEFY: gossip finality proofs (#13727) · 92c1229e
      Adrian Catangiu authored
      * sc-consensus-beefy: add justifications to gossip protocol
      
      * sc-consensus-beefy: voter gossips finality proofs
      
      * sc-consensus-beefy: add finality proof gossip test
      
      * sc-consensus-beefy: always gossip finality proof
      
      Gossip finality proof in _both_ cases of reaching finality threshold
      through votes:
      1. threshold reached through self vote,
      2. threshold reached through incoming vote.
      
      * address comments
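      A minimal sketch of the rule above, with hypothetical types and names (not the actual sc-consensus-beefy code): both the self-vote and the incoming-vote case funnel into the same gossip path once the threshold is reached.

      ```rust
      /// Hypothetical stand-ins for the real BEEFY types.
      struct FinalityProof;

      enum VoteOrigin {
          OurOwn,   // case 1: threshold reached through self vote
          Incoming, // case 2: threshold reached through incoming vote
      }

      struct Round {
          votes: usize,
          threshold: usize,
      }

      impl Round {
          /// Count a vote; return a proof exactly when the threshold is reached.
          fn add_vote(&mut self, _origin: VoteOrigin) -> Option<FinalityProof> {
              self.votes += 1;
              (self.votes == self.threshold).then(|| FinalityProof)
          }
      }

      fn on_vote(round: &mut Round, origin: VoteOrigin, gossip: &mut impl FnMut(FinalityProof)) {
          // Both origins take the same path: if this vote finalizes the round,
          // gossip the resulting finality proof immediately.
          if let Some(proof) = round.add_vote(origin) {
              gossip(proof);
          }
      }
      ```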
    • Build wasm for mvp cpu (#13758) · 25a616ce
      Alexander Theißen authored
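      The title alone describes the change; as context, a hedged sketch of the mechanism: Substrate's wasm-builder spawns a nested cargo build, and rustc's `-C target-cpu=mvp` restricts codegen to baseline (MVP) WebAssembly. The code below is an illustration, not the actual wasm-builder implementation.

      ```rust
      use std::process::Command;

      fn main() {
          // Hypothetical sketch: run the nested wasm build with the CPU pinned
          // to "mvp" so no post-MVP WebAssembly instructions are emitted.
          let status = Command::new("cargo")
              .args(["build", "--target", "wasm32-unknown-unknown", "--release"])
              .env("RUSTFLAGS", "-C target-cpu=mvp")
              .status()
              .expect("failed to run cargo");
          assert!(status.success(), "wasm build failed");
      }
      ```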
    • [Enhancement] Throw an error when there are too many pallets (#13763) · cc3152bc
      Roman Useinov authored
      
      
      * [Enhancement] Throw an error when there are too many pallets
      
      * fix ui test
      
      * fix PR comments
      
      * Update frame/support/procedural/src/construct_runtime/mod.rs
      
      Co-authored-by: Bastian Köcher <[email protected]>
      
      * Update frame/support/procedural/src/construct_runtime/mod.rs
      
      Co-authored-by: Bastian Köcher <[email protected]>
      
      * ".git/.scripts/commands/fmt/fmt.sh"
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: command-bot <>
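      For context: `construct_runtime!` assigns each pallet a `u8` index, so a runtime can address at most 256 pallets. A hedged sketch of the kind of check the title describes (hypothetical helper, not the actual macro code):

      ```rust
      use proc_macro2::Span;

      /// Hypothetical helper: fail macro expansion with a readable error
      /// instead of overflowing the `u8` pallet-index space.
      fn check_pallet_count(span: Span, count: usize) -> syn::Result<()> {
          const MAX_PALLETS: usize = 256; // the `u8` index space
          if count > MAX_PALLETS {
              return Err(syn::Error::new(
                  span,
                  "The number of pallets exceeds the maximum number of indices the runtime can hold",
              ));
          }
          Ok(())
      }
      ```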
    • Attempt to relieve pressure on `mpsc_network_worker` (#13725) · 4240490d
      Aaro Altonen authored
      * Attempt to relieve pressure on `mpsc_network_worker`
      
      `SyncingEngine` interacting with `NetworkWorker` can put a lot of strain
      on the channel when the number of inbound connections is high:
      `SyncingEngine` is notified of each inbound substream, which it can then
      either accept or reject, and this causes a lot of message exchange on the
      already busy channel.
      
      Use a direct channel pair between `Protocol` and `SyncingEngine`
      to exchange notification events. This is a temporary change to alleviate
      the problems caused by syncing being an independent protocol; it will be
      removed once `NotificationService` is implemented.
      
      * Apply review comments
      
      * fixes
      
      * trigger ci
      
      * Fix tests
      
      Verify that both peers have a connection now that the validation goes
      through `SyncingEngine`. Depending on how the tasks are scheduled,
      one of them might not have the peer registered in `SyncingEngine`, at which
      point the test won't make any progress because a block announcement received
      from an unknown peer is discarded.
      
      Move polling of `ChainSync` to the end of the function so that if a block
      announcement causes a block request to be sent, the request goes out in the
      same call to `SyncingEngine::poll()`.
      
      ---------
      
      Co-authored-by: parity-processbot <>
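      A hedged sketch of the "direct channel pair" idea (hypothetical types; the real code differs): `Protocol` and `SyncingEngine` exchange notification events and validation results over their own dedicated channels, keeping that traffic off the shared `mpsc_network_worker` queue.

      ```rust
      use futures::channel::mpsc;

      struct PeerId(u64);

      /// Protocol -> SyncingEngine: a peer opened an inbound substream.
      enum NotificationEvent {
          IncomingSubstream { peer: PeerId },
      }

      /// SyncingEngine -> Protocol: accept or reject that substream.
      enum ValidationResult {
          Accept,
          Reject,
      }

      /// Handles held by `Protocol`.
      struct ProtocolHandle {
          to_syncing: mpsc::UnboundedSender<NotificationEvent>,
          from_syncing: mpsc::UnboundedReceiver<ValidationResult>,
      }

      /// Handles held by `SyncingEngine`.
      struct SyncingHandle {
          from_protocol: mpsc::UnboundedReceiver<NotificationEvent>,
          to_protocol: mpsc::UnboundedSender<ValidationResult>,
      }

      /// Build the dedicated channels, bypassing the busy network-worker channel.
      fn direct_channel_pair() -> (ProtocolHandle, SyncingHandle) {
          let (to_syncing, from_protocol) = mpsc::unbounded();
          let (to_protocol, from_syncing) = mpsc::unbounded();
          (
              ProtocolHandle { to_syncing, from_syncing },
              SyncingHandle { from_protocol, to_protocol },
          )
      }
      ```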
    • Application Crypto cleanup (#13746) · 7985495b
      Davide Galassi authored
      
      
      * Adjust application crypto docs
      
      * Blanket implementation for 'RuntimeAppPublic' trait
      
      * Blanket implementation of 'BoundToRuntimeAppPublic' for 'RuntimeAppPublic'
      
      * Relax type bounds
      
      * Docs fix
      
      * restore MaybeHash
      
      * Commit suggestion
      
      Co-authored-by: Anton <[email protected]>
      
      ---------
      
      Co-authored-by: Anton <[email protected]>
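      A hedged sketch of the blanket implementation named above, with simplified trait shapes (the real traits in sp-application-crypto carry more methods and bounds):

      ```rust
      trait RuntimeAppPublic: Sized {
          const ID: &'static str;
      }

      trait BoundToRuntimeAppPublic {
          type Public: RuntimeAppPublic;
      }

      // Blanket impl: every runtime app public key is bound to itself, so
      // key types no longer need a manual `BoundToRuntimeAppPublic` impl.
      impl<T: RuntimeAppPublic> BoundToRuntimeAppPublic for T {
          type Public = Self;
      }
      ```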