1. Dec 14, 2023
    • [Backport] txn version bump from 1.5.0 (#2709) · 3e4e8c0b
      Egor_P authored
      This PR backports `transaction_version` bump from `1.5.0` release back
      to `master`
    • Introduce subsystem benchmarking tool (#2528) · 8a6e9ef1
      Andrei Sandu authored
      This tool makes it easy to run parachain consensus stress/performance
      testing on your development machine or in CI.
      
      ## Motivation
      The parachain consensus node implementation spans many modules, which
      we call subsystems. Each subsystem is responsible for a small part of
      the parachain consensus pipeline, but most of the load and performance
      issues are localized in just a few core subsystems like
      `availability-recovery`, `approval-voting` or `dispute-coordinator`.
      In the absence of such a tool, we would run large test nets to
      load/stress test these parts of the system. Setting up such a test and
      making sense of the amount of data it produces is expensive, hard to
      orchestrate and a huge development time sink.
      
      ## PR contents
      - CLI tool 
      - Data Availability Read test
      - reusable mockups and components needed so far
      - Documentation on how to get started
      
      ### Data Availability Read test
      
      An overseer is built using a real `availability-recovery` subsystem
      instance, while dependent subsystems like `av-store`, `network-bridge`
      and `runtime-api` are mocked. The network bridge emulates all the
      network peers and their responses to requests.
      
      The test runs for a number of blocks. For each block it sends a
      `RecoverAvailableData` request for an arbitrary number of candidates
      and waits for the subsystem to respond to all requests before moving
      on to the next block. At the same time we collect the usual subsystem
      metrics and task CPU metrics, and show some nice progress reports
      while running.
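
      A minimal sketch of that per-block loop, using hypothetical helpers
      (`env`, `state`, `config`); the message shape follows
      `AvailabilityRecoveryMessage::RecoverAvailableData`, but the real loop
      in the tool may differ in detail:

      ```
      for block_num in 1..=config.num_blocks {
          let mut pending = Vec::new();
          // One recovery request per candidate occupying a core in this block.
          for candidate in state.candidates_for_block(block_num) {
              let (tx, rx) = futures::channel::oneshot::channel();
              env.send_message(AvailabilityRecoveryMessage::RecoverAvailableData(
                  candidate.receipt.clone(),
                  state.session_index,
                  Some(candidate.backing_group),
                  tx,
              ))
              .await;
              pending.push(rx);
          }
          // Wait for the subsystem to answer every request before moving on.
          futures::future::join_all(pending).await;
      }
      ```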
      
      ### Here is what the CLI output looks like:
      
      ```
      [2023-11-28T13:06:27Z INFO  subsystem_bench::core::display] n_validators = 1000, n_cores = 20, pov_size = 5120 - 5120, error = 3, latency = Some(PeerLatency { min_latency: 1ms, max_latency: 100ms })
      [2023-11-28T13:06:27Z INFO  subsystem-bench::availability] Generating template candidate index=0 pov_size=5242880
      [2023-11-28T13:06:27Z INFO  subsystem-bench::availability] Created test environment.
      [2023-11-28T13:06:27Z INFO  subsystem-bench::availability] Pre-generating 60 candidates.
      [2023-11-28T13:06:30Z INFO  subsystem-bench::core] Initializing network emulation for 1000 peers.
      [2023-11-28T13:06:30Z INFO  subsystem-bench::availability] Current block 1/3
      [2023-11-28T13:06:30Z INFO  substrate_prometheus_endpoint] Prometheus exporter started at 127.0.0.1:9999
      [2023-11-28T13:06:30Z INFO  subsystem_bench::availability] 20 recoveries pending
      [2023-11-28T13:06:37Z INFO  subsystem_bench::availability] Block time 6262ms
      [2023-11-28T13:06:37Z INFO  subsystem-bench::availability] Sleeping till end of block (0ms)
      [2023-11-28T13:06:37Z INFO  subsystem-bench::availability] Current block 2/3
      [2023-11-28T13:06:37Z INFO  subsystem_bench::availability] 20 recoveries pending
      [2023-11-28T13:06:43Z INFO  subsystem_bench::availability] Block time 6369ms
      [2023-11-28T13:06:43Z INFO  subsystem-bench::availability] Sleeping till end of block (0ms)
      [2023-11-28T13:06:43Z INFO  subsystem-bench::availability] Current block 3/3
      [2023-11-28T13:06:43Z INFO  subsystem_bench::availability] 20 recoveries pending
      [2023-11-28T13:06:49Z INFO  subsystem_bench::availability] Block time 6194ms
      [2023-11-28T13:06:49Z INFO  subsystem-bench::availability] Sleeping till end of block (0ms)
      [2023-11-28T13:06:49Z INFO  subsystem_bench::availability] All blocks processed in 18829ms
      [2023-11-28T13:06:49Z INFO  subsystem_bench::availability] Throughput: 102400 KiB/block
      [2023-11-28T13:06:49Z INFO  subsystem_bench::availability] Block time: 6276 ms
      [2023-11-28T13:06:49Z INFO  subsystem_bench::availability] 
          
          Total received from network: 415 MiB
          Total sent to network: 724 KiB
          Total subsystem CPU usage 24.00s
          CPU usage per block 8.00s
          Total test environment CPU usage 0.15s
          CPU usage per block 0.05s
      ```
      
      ### Prometheus/Grafana stack in action
      (Three Grafana dashboard screenshots from 2023-11-28 omitted.)
      
      ---------
      
      Signed-off-by: Andrei Sandu <[email protected]>
  2. Dec 13, 2023
    • Set clippy lints in workspace (requires rust 1.74) (#2390) · be8e6268
      Squirrel authored
      
      
      We currently use a bit of a hack in `.cargo/config` to make sure that
      clippy isn't too annoying, by specifying the list of lints there.

      There is now a stable way to define lints for a workspace. The only
      downside is that every crate has to opt into this, so there are a
      *few* files modified in this PR.
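
      For reference, the mechanism (stable since Rust 1.74) looks roughly
      like this; the actual lint list configured in this PR differs:

      ```
      # Workspace root Cargo.toml: define the lint levels once.
      [workspace.lints.clippy]
      all = { level = "allow", priority = 0 }
      correctness = { level = "warn", priority = 1 }

      # Each member crate's Cargo.toml: opt into the workspace lints.
      [lints]
      workspace = true
      ```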
      
      Dependencies:
      
      - [x] PR that upgrades CI to use rust 1.74 is merged.
      
      ---------
      
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
    • Rename `ExportGenesisStateCommand` to `ExportGenesisHeadCommand` and make it respect custom genesis block builders (#2331) · 8683bbee
      Joshy Orndorff authored
      
      Closes #2326.
      
      This PR both fixes a logic bug and replaces an incorrect name.
      
      ## Bug Fix: Respecting custom genesis builder
      
      Prior to this PR, the standard logic for creating a genesis block was
      duplicated inside cumulus. This PR removes that duplicated logic and
      calls into the proper `BuildGenesisBlock` implementation.
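
      For context, the trait being called into looks roughly like this
      (paraphrased, not copied verbatim from the source):

      ```
      /// Build the genesis block, together with the block import operation
      /// that writes the genesis state to the backend.
      pub trait BuildGenesisBlock<Block: BlockT> {
          type BlockImportOperation;

          fn build_genesis_block(
              self,
          ) -> sp_blockchain::Result<(Block, Self::BlockImportOperation)>;
      }
      ```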
      
      One consequence is that if the genesis block has already been
      initialized, it will not be re-created, but rather read from the
      database, as it is for other node invocations. So you need to watch
      out for old unpurged data during the development process. Offchain
      tools may need to be updated accordingly. I've already filed
      https://github.com/paritytech/zombienet/issues/1519
      
      
      
      ## Rename: It doesn't export state. It exports head data.
      
      The name export-genesis-state was always wrong, and it's never too
      late to right a wrong. I've changed the name of the struct to
      `ExportGenesisHeadCommand`.
      
      There is still the question of what to do with individual nodes' public
      CLIs. I have updated the parachain template to a reasonable default that
      preserves compatibility with tools that will expect
      `export-genesis-state` to still work. And I've chosen not to modify the
      public CLIs of any other nodes in the repo. I'll leave it up to their
      individual owners/maintainers to decide whether that is appropriate.
      
      ---------
      
      Co-authored-by: Joshy Orndorff <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • Approve multiple candidates with a single signature (#1191) · a84dd0db
      Alexandru Gheorghe authored
      Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
      Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
      v0: https://github.com/paritytech/polkadot/pull/7554
      
      ## Overall idea
      
      When approval-voting checks a candidate and is ready to advertise the
      approval, we defer it, per relay chain block, until we either have
      MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has
      stayed MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we
      sign all the candidates we have available.
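
      A minimal sketch of that deferral decision (hypothetical types and
      names; the real logic lives in approval-voting):

      ```
      // Per relay-chain block, ready approvals are queued rather than
      // signed immediately.
      struct PendingApprovals {
          candidates: Vec<CandidateHash>,
          earliest_tick: Tick,
      }

      fn should_sign_now(pending: &PendingApprovals, now: Tick, no_show_deadline: Tick) -> bool {
          // Enough candidates accumulated for a coalesced signature...
          pending.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT as usize
              // ...or the oldest queued candidate has waited long enough...
              || now >= pending.earliest_tick + MAX_APPROVALS_COALESCE_TICKS
              // ...or we are near the no-show deadline and must not risk finality.
              || now >= no_show_deadline
      }
      ```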
      
      This should allow us to reduce the number of approval messages we have
      to create/send/verify. The parameters are configurable, so we should
      find values that balance:

      - Security of the network: delaying the broadcast of an approval
      shouldn't put finality at risk, and to make sure that never happens we
      won't delay sending a vote if we are past 2/3 of the no-show time.
      - Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
      MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
      the measurements we did on versi that it bottlenecks
      approval-distribution/approval-voting when the number of validators
      and parachains increases significantly.
      - Block storage: in case of disputes we have to import these votes on
      chain, which increases the necessary storage by
      MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
      disputes are not the normal mode of operation and we will limit
      MAX_APPROVAL_COALESCE_COUNT to single digits, this should be good
      enough. Alternatively, we could create a better way to store this
      on-chain through indirection, if that's needed.
      
      ## Other fixes:
      - Fixed the fact that we were sending random assignments to
      non-validators, that was wrong because those won't do anything with it
      and they won't gossip it either because they do not have a grid topology
      set, so we would waste the random assignments.
      - Added metrics to be able to debug potential no-shows and
      mis-processing of approvals/assignments.
      
      ## TODO:
      - [x] Get feedback, that this is moving in the right direction. @ordian
      @sandreim @eskimor @burdges, let me know what you think.
      - [x] More and more testing.
      - [x]  Test in versi.
      - [x] Make MAX_APPROVAL_COALESCE_COUNT &
      MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
      - [x] Make sure the backwards compatibility works correctly
      - [x] Make sure this direction is compatible with other streams of work:
      https://github.com/paritytech/polkadot-sdk/issues/635 &
      https://github.com/paritytech/polkadot-sdk/issues/742
      
      
      - [x] Final versi burn-in before merging
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
  3. Dec 12, 2023
    • Ensure xcm versions over bridge (on sending chains) (#2481) · 575b8f8d
      Branislav Kontur authored
      ## Summary
      
      This pull request proposes a solution for improved control of the
      versioned XCM flow over the bridge (across different consensus chains)
      and resolves the situation where the sending chain/consensus has already
      migrated to a higher XCM version than the receiving chain/consensus.
      
      ## Problem/Motivation
      
      The current flow over the bridge involves a transfer from AssetHubRococo
      (AHR) to BridgeHubRococo (BHR) to BridgeHubWestend (BHW) and finally to
      AssetHubWestend (AHW), beginning with a reserve-backed transfer on AHR.
      
      In this process:
      1. AHR sends XCM `ExportMessage` through `XcmpQueue`, incorporating XCM
      version checks using the `WrapVersion` feature, influenced by
      `pallet_xcm::SupportedVersion` (managed by
      `pallet_xcm::force_xcm_version` or version discovery).
      
      2. BHR handles the `ExportMessage` instruction, utilizing the latest XCM
      version. The `HaulBlobExporter` converts the inner XCM to
      [`VersionedXcm::from`](https://github.com/paritytech/polkadot-sdk/blob/63ac2471aa0210f0ac9903bdd7d8f9351f9a635f/polkadot/xcm/xcm-builder/src/universal_exports.rs#L465-L467),
      also using the latest XCM version.
      
      However, challenges arise:
      - Incompatibility when BHW uses a different version than BHR. For
      instance, if BHR migrates to **XCMv4** while BHW remains on **XCMv3**,
      BHR's `VersionedXcm::from` uses `VersionedXcm::V4` variant, causing
      encoding issues for BHW.
        ```
        // Just a simulation of a possible error, which could happen on BHW
        // (this code is based on actual master without XCMv4)
        let encoded = hex_literal::hex!("0400");
        println!("{:?}", VersionedXcm::<()>::decode(&mut &encoded[..]));

        // Output:
        // Err(Error { cause: None, desc: "Could not decode `VersionedXcm`,
        // variant doesn't exist" })
        ```
      - Similar compatibility issues exist between AHR and AHW.
      
      ## Solution
      
      This pull request introduces the following solutions:
      
      1. **New trait `CheckVersion`** - added to the `xcm` module and exposing
      `pallet_xcm::SupportedVersion`. This enhancement allows checking the
      actual XCM version for desired destinations outside of the `pallet_xcm`
      module.
      
      2. **Version Check in `HaulBlobExporter`** uses `CheckVersion` to check
      known/configured destination versions, ensuring compatibility. For
      example, in the scenario mentioned, BHR can store the version `3` for
      BHW. If BHR is on XCMv4, it will attempt to downgrade the message to
      version `3` instead of using the latest version `4`.
      
      3. **Version Check in `pallet-xcm-bridge-hub-router`** - this check
      ensures compatibility with the real destination's XCM version,
      preventing the unnecessary sending of messages to the local bridge hub
      if versions are incompatible.
      
      These additions aim to improve the control and compatibility of XCM
      flows over the bridge, and to address issues related to version
      mismatches.
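
      A rough sketch of the version-check flow in the exporter (the trait
      and call sites here are approximated from the description above, not
      copied verbatim from the PR):

      ```
      // Exposes pallet_xcm::SupportedVersion outside of pallet_xcm.
      pub trait CheckVersion {
          /// Return the XCM version known/configured for `dest`, if any.
          fn check_xcm_version(dest: &MultiLocation) -> Option<XcmVersion>;
      }

      // Inside a HaulBlobExporter-like exporter: instead of always wrapping
      // the message in the latest version, downgrade it to the version the
      // destination is known to support.
      let dest_version = V::check_xcm_version(&dest)
          .ok_or(SendError::DestinationUnsupported)?;
      let message = VersionedXcm::from(xcm)
          .into_version(dest_version)
          .map_err(|()| SendError::DestinationUnsupported)?;
      ```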
      
      ## Possible alternative solution
      
      _(More investigation is needed, and at the very least, it should extend
      to XCMv4/5. If this proves to be a viable option, I can open an RFC for
      XCM.)._
      
      Add the `XcmVersion` attribute to the `ExportMessage` so that the
      sending chain can determine, based on what is stored in
      `pallet_xcm::SupportedVersion`, the version the destination is using.
      This way, we may not need to handle the version in `HaulBlobExporter`.
      
      ```
      ExportMessage {
          network: NetworkId,
          destination: InteriorMultiLocation,
          xcm: Xcm<()>,
          destination_xcm_version: Version, // <- new attribute
      },
      ```
      
      ```
      pub trait ExportXcm {
          fn validate(
              network: NetworkId,
              channel: u32,
              universal_source: &mut Option<InteriorMultiLocation>,
              destination: &mut Option<InteriorMultiLocation>,
              message: &mut Option<Xcm<()>>,
              destination_xcm_version: Version, // <- new attribute
          ) -> SendResult<Self::Ticket>;
      }
      ```
      
      ## Future Directions
      
      This PR does not fix version discovery over the bridge; further
      investigation will be conducted in
      https://github.com/paritytech/polkadot-sdk/issues/2417.
      
      ## TODO
      
      - [x] `pallet_xcm` mock for tests uses hard-coded XCM version `2` -
      change to 3 or latest?
      - [x] fix `pallet-xcm-bridge-hub-router`
      - [x] fix HaulBlobExporter with version determination
      [here](https://github.com/paritytech/polkadot-sdk/blob/2183669d05f9b510f979a0cc3c7847707bacba2e/polkadot/xcm/xcm-builder/src/universal_exports.rs#L465)
      - [x] add unit-tests to the runtimes
      - [x] run benchmarks for `ExportMessage`
      - [x] extend local run scripts about `force_xcm_version(dest, version)`
      - [ ] when merged, prepare governance calls for Rococo/Westend
      - [ ] add PRDoc
      
      Part of: https://github.com/paritytech/parity-bridges-common/issues/2719
      
      ---------
      
      Co-authored-by: command-bot <>
    • Changelogs local generation (#1411) · 42a3afba
      Chevdor authored
      
      
      This PR introduces a script and some templates to use the prdoc files
      involved in a release to build:
      - the changelog
      - a simple draft of the audience documentation

      Since prdoc presence was only enforced midway through version 1.5.0,
      not all PRs came with a `prdoc` file.
      This PR creates all the missing `prdoc` files with some minimal
      content, allowing the changelog to be generated properly.
      The generated content is **not** suitable for the audience
      documentation.

      The audience documentation will be possible with the next version,
      when all PRs come with a proper `prdoc`.
      
      ## Assumptions
      
      - the prdoc files for release `vX.Y.Z` have been moved under
      `prdoc/X.Y.Z`
      - for now, the changelog requires the prdoc files to contain author +
      topic; those fields are otherwise optional.
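
      As an illustration, a minimal prdoc file along these lines could look
      as follows (the schema shown is an approximation, not the
      authoritative one):

      ```
      title: Short PR title used in the changelog

      author: octocat
      topic: runtime

      doc:
        - audience: Runtime Dev
          description: |
            One-paragraph description used to build the changelog entry.

      crates:
        - name: pallet-example
      ```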
      
      The build script can be called as:
      ```
      VERSION=X.Y.Z ./scripts/release/build-changelogs.sh
      ```
      
      Related:
      -  #1408
      
      ---------
      
      Co-authored-by: EgorPopelyaev <[email protected]>
    • Staking: Add `deprecate_controller_batch` AdminOrigin call (#2589) · 048a9c27
      Ross Bulat authored
      
      
      Partially Addresses #2500
      
      Adds a `deprecate_controller_batch` call to the staking pallet that is
      callable by `Root` and `StakingAdmin`. To be used for controller
      account deprecation and removed thereafter. Adds a
      `MaxControllersInDeprecationBatch` pallet constant that defines the
      max possible deprecations per call.
      
      - [x] Add `deprecate_controller_batch` call, and
      `MaxControllersInDeprecationBatch` constant.
      - [x] Add tests, benchmark, weights. Tests that weight is only consumed
      if unique pair.
      - [x] Adds `StakingAdmin` origin to staking's `AdminOrigin` type in
      westend runtime.
      - [x] Determined that the worst case of 5,900 deprecations fits into
      `maxBlock` `proofSize` and `refTime` in both normal and operational
      thresholds, meaning we can deprecate all controllers for each network
      in one call.
      
      ## Block Weights
      
      By querying `consts.system.blockWeights` we can see that the
      `deprecate_controller_batch` weights fit within the `normal` threshold
      on Polkadot.
      
      #### `deprecate_controller_batch` where i = 5900:
      #### Ref time: 69,933,325,300
      #### Proof size: 21,040,390
      
      ### Polkadot 
      
      ```
      // consts.system.blockWeights

      maxBlock: {
          refTime: 2,000,000,000,000
          proofSize: 18,446,744,073,709,551,615
      }
      normal: {
          maxExtrinsic: {
              refTime: 1,479,873,955,000
              proofSize: 13,650,590,614,545,068,195
          }
          maxTotal: {
              refTime: 1,500,000,000,000
              proofSize: 13,835,058,055,282,163,711
          }
      }
      ```
      
      ### Kusama
      
      ```
      // consts.system.blockWeights

      maxBlock: {
          refTime: 2,000,000,000,000
          proofSize: 18,446,744,073,709,551,615
      }
      normal: {
          maxExtrinsic: {
              refTime: 1,479,875,294,000
              proofSize: 13,650,590,614,545,068,195
          }
          maxTotal: {
              refTime: 1,500,000,000,000
              proofSize: 13,835,058,055,282,163,711
          }
      }
      ```
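
      For scale: the worst-case refTime above is roughly 4.7% of the normal
      `maxExtrinsic` refTime (69,933,325,300 / 1,479,873,955,000 ≈ 0.047) on
      both networks, and the proof size of 21,040,390 is far below the
      limits, so a single batch covering all deprecations fits comfortably.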
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Gonçalo Pestana <[email protected]>
  4. Dec 11, 2023
  5. Dec 08, 2023
    • Update proc-macro-crate (#2660) · f6ae1458
      Bastian Köcher authored
    • Westend: Fellowship Treasury (#2532) · da40d97a
      Muharem Ismailov authored
      
      
      Treasury Pallet Instance for the Fellowship in Westend Collectives.
      
      In this update, we present a Treasury Pallet Instance that is under
      the control of the Fellowship body, with oversight from the Root and
      Treasurer origins. Here's how it is governed:
      - the Root origin has the authority to reject or approve spend
      proposals, with no amount limit for approvals.
      - the Treasurer origin has the authority to reject or approve spend
      proposals, with approval limits of up to 10,000,000 DOT.
      - the Voice of all Fellows ranked at 3 or above can reject or approve
      spend proposals, with a maximum approval limit of 10,000 DOT.
      - the Voice of Fellows ranked at 4 or above can also reject or approve
      spend proposals, with a maximum approval limit of 10,000,000 DOT.
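
      In pallet terms, such tiered limits are typically expressed through
      `pallet_treasury`'s `SpendOrigin`, an `EnsureOrigin` whose `Success`
      value is the maximum amount the origin may spend. A loose sketch
      (origin names and limit values here are illustrative, not the
      runtime's actual configuration):

      ```
      // Each arm maps an origin to the maximum amount it may approve.
      type FellowshipSpendOrigin = EitherOf<
          // Root: effectively unlimited.
          EnsureRootWithSuccess<AccountId, ConstU128<{ u128::MAX }>>,
          EitherOf<
              // Treasurer: up to 10,000,000 DOT (in plancks).
              MapSuccess<Treasurer, Replace<ConstU128<{ 10_000_000 * UNITS }>>>,
              // Fellows of rank 3+: up to 10,000 DOT.
              MapSuccess<Fellows, Replace<ConstU128<{ 10_000 * UNITS }>>>,
          >,
      >;
      ```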
      
      Additionally, we introduce an Asset Rate Pallet Instance to establish
      conversion rates from asset A to B. This is used to determine whether
      a proposed spend amount involving a non-native asset is permissible by
      the commanding origin. The rates can be set up by the Root or
      Treasurer origins, or by the Voice of all Fellows.
      
      ---------
      
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: joepetrowski <[email protected]>
  6. Dec 07, 2023
  7. Dec 06, 2023
  8. Dec 05, 2023
  9. Dec 04, 2023
  10. Dec 02, 2023
  11. Dec 01, 2023
  12. Nov 30, 2023
  13. Nov 29, 2023
    • Adding LuckyFriday's Bootnodes per IBP Application (#2538) · 2858cbc2
      Will | Paradox | ParaNodes.io authored
      Good day,
      
      This PR requests the inclusion of two bootnode entries each for
      Polkadot, Kusama and Westend as part of LuckyFriday's IBP application.
      The nodes are hosted on self-owned hardware in a co-located facility.
      We've undertaken connectivity tests ourselves, and members of the IBP
      have done so as well.
      
      The test commands used are as follows:
      
      ```
      polkadot --chain westend --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/ibp-boot-westend.luckyfriday.io/tcp/30334/wss/p2p/12D3KooWDg1YEytdwFFNWroFj6gio4YFsMB3miSbHKgdpJteUMB9" --no-hardware-benchmarks
      
      polkadot --chain westend --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/ibp-boot-westend.luckyfriday.io/tcp/30333/p2p/12D3KooWDg1YEytdwFFNWroFj6gio4YFsMB3miSbHKgdpJteUMB9" --no-hardware-benchmarks
      
      polkadot --chain kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/ibp-boot-kusama.luckyfriday.io/tcp/30334/wss/p2p/12D3KooW9vu1GWHBuxyhm7rZgD3fhGZpNajPXFexadvhujWMgwfT" --no-hardware-benchmarks
      
      polkadot --chain kusama --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/ibp-boot-kusama.luckyfriday.io/tcp/30333/p2p/12D3KooW9vu1GWHBuxyhm7rZgD3fhGZpNajPXFexadvhujWMgwfT" --no-hardware-benchmarks
      
      polkadot --chain polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/ibp-boot-polkadot.luckyfriday.io/tcp/30334/wss/p2p/12D3KooWEjk6QXrZJ26fLpaajisJGHiz6WiQsR8k7mkM9GmWKnRZ" --no-hardware-benchmarks
      
      polkadot --chain polkadot --base-path /tmp/node --name "Boot" --reserved-only --reserved-nodes "/dns/ibp-boot-polkadot.luckyfriday.io/tcp/30333/p2p/12D3KooWEjk6QXrZJ26fLpaajisJGHiz6WiQsR8k7mkM9GmWKnRZ" --no-hardware-benchmarks
      ```
      
      All tests yielded syncing with 1 peer, a positive result. We have also
      backed up our node-key in the event that restoration is required.
      
      I hope that the aforementioned meets the requirement for inclusion and
      look forward to a speedy turnaround.
      
      Kind Regards,
      Will | Paradox
    • ParachainHost: No need to be generic over the block or hash type (#2537) · d3d301fa
      Bastian Köcher authored
      The `BlockNumber` and `Hash` are fixed types anyway.
  14. Nov 28, 2023
    • Rework the event system of `sc-network` (#1370) · e71c484d
      Aaro Altonen authored
      This commit introduces a new concept called `NotificationService`,
      which allows Polkadot protocols to communicate with the underlying
      notification protocol implementation directly, without routing events
      through `NetworkWorker`. Each protocol has its own service, which it
      uses to communicate with remote peers, and each `NotificationService`
      is unique to its underlying notification protocol: the
      `NotificationService` for the transaction protocol can only be used to
      send and receive transaction-related notifications.
      
      The `NotificationService` concept introduces two additional benefits:
        * allow protocols to start using custom handshakes
        * allow protocols to accept/reject inbound peers
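
      To give a feel for the model, a protocol's event loop could look
      roughly like this (event and method names approximate the new API; see
      the PR for the exact definitions):

      ```
      // Each protocol owns its service and drives it directly.
      while let Some(event) = notification_service.next_event().await {
          match event {
              // The protocol itself decides whether to accept an inbound peer.
              NotificationEvent::ValidateInboundSubstream { peer, handshake, result_tx } => {
                  let verdict = if is_acceptable(&peer, &handshake) {
                      ValidationResult::Accept
                  } else {
                      // The local handshake is withheld on rejection, so the
                      // remote peer can detect that it was rejected.
                      ValidationResult::Reject
                  };
                  let _ = result_tx.send(verdict);
              },
              NotificationEvent::NotificationStreamOpened { peer, .. } => {
                  on_peer_connected(peer);
              },
              NotificationEvent::NotificationReceived { peer, notification } => {
                  handle_notification(peer, notification);
              },
              NotificationEvent::NotificationStreamClosed { peer } => {
                  on_peer_disconnected(peer);
              },
          }
      }
      ```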
      
      Previously, the validation of inbound connections was solely the
      responsibility of `ProtocolController`. This caused issues with light
      peers and `SyncingEngine`: `ProtocolController` would accept more
      peers than `SyncingEngine` could, which caused peers to have differing
      views of their own states. `SyncingEngine` would reject the excess
      peers, but these rejections were not properly communicated to those
      peers, causing them to assume that they were accepted.
      
      With `NotificationService`, the local handshake is not sent to the
      remote peer if the peer is rejected, which allows it to detect that it
      was rejected.
      
      This commit also deprecates the use of `NetworkEventStream` for all
      notification-related events; going forward, only DHT events are
      provided through `NetworkEventStream`. If protocols wish to follow
      each other's events, they must introduce additional abstractions, as
      is done for the GRANDPA and transaction protocols by following the
      syncing protocol through `SyncEventStream`.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/512
      Fixes https://github.com/paritytech/polkadot-sdk/issues/514
      Fixes https://github.com/paritytech/polkadot-sdk/issues/515
      Fixes https://github.com/paritytech/polkadot-sdk/issues/554
      Fixes https://github.com/paritytech/polkadot-sdk/issues/556
      
      ---
      These changes are transferred from
      https://github.com/paritytech/substrate/pull/14197, but there are no
      functional changes compared to that PR.
      
      ---------
      
      Co-authored-by: Dmitry Markin <[email protected]>
      Co-authored-by: Alexandru Vasile <[email protected]>
    • polkadot: remove grandpa pause support (#2511) · b0b4451f
      André Silva authored
      This was never used and we probably don't need it anyway.