Commit 0b52a2c1 authored by Alin Dima, committed by GitHub

prospective-parachains rework: take II (#4937)

Resolves https://github.com/paritytech/polkadot-sdk/issues/4800

# Problem
In https://github.com/paritytech/polkadot-sdk/pull/4035, we removed
support for parachain forks and cycles and added support for backing
unconnected candidates (candidates for which we don't yet know the full
path to the latest included block), which is useful for elastic scaling
(parachains using multiple cores).

Removing support for backing forks turned out to be a bad idea, as there
are legitimate cases for a parachain to fork (for example, if it uses
another consensus mechanism, like BABE or PoW). Without fork support,
validators get lower backing rewards (depending on whether they back the
winning fork or not) and higher pressure falls on only half of the
backing group (during availability-distribution, for example). Since we
don't yet have approval voting rewards, backing rewards are a pretty big
deal (which may change in the future).

# Description

A backing group is now allowed to back forks. Once a candidate becomes
backed (has the minimum backing votes), we don't accept new forks unless
they adhere to the new fork selection rule (have a lower candidate
hash).
This helps with keeping the implementation simpler, since forks will
only be taken into account for candidates which are not backed yet (only
seconded).
Having this fork selection rule also helps reduce the work backing
validators need to do, since they have a shared way of picking the
winning fork: once they see a candidate backed, they can all converge on
that fork and stop accepting new ones. They do still accept new forks
during the seconding phase (until the backing quorum is reached).
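
Conceptually, the rule is just a deterministic comparison of candidate
hashes. Here is a minimal, hypothetical sketch (the type and function
names are assumptions, not the subsystem's actual API):

```rust
// A minimal sketch of the fork selection rule described above. The real
// `CandidateHash` type lives in polkadot-primitives; this stand-in only
// illustrates the ordering. Lower hash wins, so all backing validators
// converge on the same fork without any extra coordination.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct CandidateHash([u8; 32]);

/// Returns true if `new_candidate` should be preferred over an already
/// backed `existing` candidate occupying the same position in the chain.
fn fork_selection_rule(new_candidate: &CandidateHash, existing: &CandidateHash) -> bool {
    new_candidate < existing
}
```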

Therefore, a block author that is not part of the backing group will
likely never even see the forks, only the winning one.

Just as before, a parachain producing forks will not be able to leverage
elastic scaling, but it will still work on a single core. Also, cycles
are still not accepted.

## Some implementation details

`CandidateStorage` is no longer a subsystem-wide construct. It
previously held candidates from all relay chain forks, which complicated
the code. Each fragment chain now holds its own candidate chain and its
potential candidates. This should not increase storage consumption, since
the heavy candidate data is already wrapped in an `Arc` and shared. It
does, however, allow for great simplification and increased readability.

`FragmentChain`s now only build a chain of backed candidates, applying
the fork selection rule. As said before, each `FragmentChain` is now also
responsible for maintaining its own potential candidate storage.
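
Roughly, the per-chain layout looks like the sketch below. All names are
illustrative rather than the actual types; the point is that each chain
owns its storage while the heavy candidate data stays behind `Arc`s and
is shared between chains:

```rust
use std::{collections::HashMap, sync::Arc};

type CandidateHash = [u8; 32];

// Stand-ins for the heavy per-candidate data (fields elided).
struct CommittedCandidateReceipt;
struct PersistedValidationData;

// One stored candidate: the receipt and PVD are shared via `Arc`, so a
// candidate referenced by several fragment chains is allocated only once.
struct CandidateEntry {
    receipt: Arc<CommittedCandidateReceipt>,
    pvd: Arc<PersistedValidationData>,
}

// Each fragment chain owns a best chain of backed candidates plus its own
// storage of potential (seconded/unconnected) candidates.
struct FragmentChain {
    best_chain: Vec<CandidateHash>,
    unconnected: HashMap<CandidateHash, CandidateEntry>,
}
```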

Since we no longer have the subsystem-wide `CandidateStorage`, when
processing a new leaf update we use the storage of its latest ancestor,
which may contain seconded/backed candidates that are still in scope.
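
A hedged sketch of that hand-over, with assumed names (`View`,
`populate_from_previous`) rather than the subsystem's actual API:

```rust
use std::collections::HashMap;

type Hash = [u8; 32];

#[derive(Default)]
struct FragmentChain;

impl FragmentChain {
    // Carry over still-in-scope candidates from a previous chain (elided).
    fn populate_from_previous(&mut self, _prev: &FragmentChain) {}
}

struct View {
    per_leaf: HashMap<Hash, FragmentChain>,
}

impl View {
    fn on_new_leaf(&mut self, leaf: Hash, ancestors_newest_first: &[Hash]) {
        let mut chain = FragmentChain::default();
        // Seed the new chain from the most recent ancestor we already track.
        if let Some(prev) = ancestors_newest_first
            .iter()
            .find_map(|a| self.per_leaf.get(a))
        {
            chain.populate_from_previous(prev);
        }
        self.per_leaf.insert(leaf, chain);
    }
}
```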

When a candidate is backed, the fragment chains holding it are recreated
(due to the fork selection rule, this could trigger a "reorg" of the
fragment chain).
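
In sketch form (again with assumed names), that rebuild looks something
like this; the important part is that the chain is reconstructed rather
than patched in place, so the newly backed candidate can displace a
previously preferred fork:

```rust
use std::collections::HashMap;

type Hash = [u8; 32];
type CandidateHash = [u8; 32];

#[derive(Default)]
struct FragmentChain;

impl FragmentChain {
    fn contains(&self, _candidate: &CandidateHash) -> bool {
        true // membership check elided
    }
    // Rebuild the chain from its own candidate storage, re-applying the
    // fork selection rule (elided).
    fn rebuild(&self) -> FragmentChain {
        FragmentChain
    }
}

fn on_candidate_backed(per_leaf: &mut HashMap<Hash, FragmentChain>, backed: &CandidateHash) {
    for chain in per_leaf.values_mut() {
        if chain.contains(backed) {
            // Recreate the chain; this may "reorg" the best chain.
            *chain = chain.rebuild();
        }
    }
}
```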

I generally tried to simplify the subsystem and not introduce
unnecessary optimisations that would otherwise complicate the code and
not gain us much (fragment chains wouldn't realistically ever hold many
candidates).

TODO:
- [x] update metrics
- [x] update docs and comments
- [x] fix and add unit tests
- [x] tested with fork-producing parachain
- [x] tested with cycle-producing parachain
- [x] versi test
- [x] prdoc