Unverified Commit 35ea1c4b authored by asynchronous rob, committed by GitHub

Implement Approval Voting Subsystem (#2112)



* skeleton

* skeleton aux-schema module

* start approval types

* start aux schema with aux store

* doc

* finish basic types

* start approval types

* doc

* finish basic types

* write out schema types

* add debug and codec impls to approval types

* add debug and codec impls to approval types

also add some key computation

* add debug and codec impls to approval types

* getters for block and candidate entries

* grumbles

* remove unused AssignmentId

* load_decode utility

* implement DB clearing

* function for adding new block entry to aux store

* start `canonicalize` implementation

* more skeleton

* finish implementing canonicalize

* tag TODO

* implement a test AuxStore

* add allow(unused)

* basic loading and deleting test

* block_entry test function

* add a test for `add_block_entry`

* ensure range is exclusive at end

* test clear()

* test that add_block sets children

* add a test for canonicalize

* extract Pre-digest from header

* utilities for extracting RelayVRFStory from the header-chain

* add approval voting message types

* approval distribution message type

* subsystem skeleton

* state struct

* add futures-timer

* prepare service for babe slot duration

* more skeleton

* better integrate AuxStore

* RelayVRF -> RelayVRFStory

* canonicalize

* implement some tick functionality

* guide: tweaks

* check_approval

* more tweaks and helpers

* guide: add core index to candidate event

* primitives: add core index to candidate event

* runtime: add core index to candidate events

* head handling (session window)

* implement `determine_new_blocks`

* add TODO

* change error type on functions

* compute RelayVRFModulo assignments

* compute RelayVRFDelay assignments

* fix delay tranche calc

* assignment checking

* pluralize

* some dummy code for fetching assignments

* guide: add babe epoch runtime API

* implement a current_epoch() runtime API

* compute assignments

* candidate events get backing group

* import blocks and assignments into DB

* push block approval meta

* add message types, no overseer integration yet

* notify approval distribution of new blocks

* refactor import into separate functions

* impl tranches_to_approve

* guide: improve function signatures

* guide: remove Tick from ApprovalEntry

* trigger and broadcast assignment

* most of approval launching

* remove byteorder crate

* load blocks back to finality, except on startup

* check unchecked assignments

* add claimed core to approval voting message

* fix checks

* assign only to backing group

* remove import_checked_assignment from guide

* newline

* import assignments

* abstract out a bit

* check and import approvals

* check full approvals from assignment import too

* comment

* create a Transaction utility

* must_use

* use transaction in `check_full_approvals`

* wire up wakeups

* add Ord to CandidateHash

* wakeup refactoring

* return candidate info from add_block_entry

* schedule wakeups

* background task: do candidate validation

* forward candidate validation requests

* issue approval votes when requested

* clean up a couple TODOs

* fix up session caching

* clean up last unimplemented!() items

* fix remaining warnings

* remove TODO

* implement handle_approved_ancestor

* update Cargo.lock

* fix runtime API tests

* guide: cleanup assignment checking

* use claimed candidate index instead of core

* extract time to a trait

* tests module

* write a mock clock for testing

* allow swapping out the clock

* make abstract over assignment criteria

* add some skeleton tests and simplify params

* fix backing group check

* do backing group check inside check_assignment_cert

* write some empty test functions to implement

* add a test for non-backing

* test that produced checks pass

* some empty test ideas

* runtime/inclusion: remove outdated TODO

* fix compilation

* av-store: fix tests

* dummy cert

* criteria tests

* move `TestStore` to main tests file

* fix unused warning

* test harness beginnings

* resolve slots renaming fallout

* more compilation fixes

* wip: extract pure data into a separate module

* wip: extract pure data into a separate module

* move types completely to v1

* add persisted_entries

* add conversion trait impls

* clean up some warnings

* extract import logic to own module

* schedule wakeups

* experiment with Actions

* uncomment approval-checking

* separate module for approval checking utilities

* port more code to use actions

* get approval pipeline using actions

* all logic is uncommented

* main loop processes actions

* all loop logic uncommented

* separate function for handling actions

* remove last unimplemented item

* clean up warnings

* State gives read-only access to underlying DB

* tests for approval checking

* tests for approval criteria

* skeleton test module for import

* list of import tests to do

* some test glue code

* test reject bad assignment

* test slot too far in future

* test reject assignment with unknown candidate

* remove loads_blocks tests

* determine_new_blocks back to finalized & harness

* more coverage for determining new blocks

* make `imported_block_info` have less reliance on State

* candidate_info tests

* tests for session caching

* remove println

* extricate DB and main TestStores

* rewrite approval checking logic to counteract early delays

* move state out of function

* update approval-checking tests

* tweak wakeups & scheduling logic

* rename check_full_approvals

* test that assignment import updates candidate

* some approval import tests

* some tests for check_and_apply_approval

* add 'full' qualifier to avoid confusion

* extract should-trigger logic to separate function

* some tests for all triggering

* tests for when we trigger assignments

* test wakeups

* add block utilities for testing

* some more tests for approval updates

* approved_ancestor tests

* new action type for launch approval

* process-wakeup tests

* clean up some warnings

* fix in_future test

* approval checking tests

* tighten up too-far-in-future

* special-case genesis when caching sessions

* fix bitfield len

Co-authored-by: Andronik Ordian <write@reusable.software>
parent 43771764
Pipeline #123893 passed with stages in 20 minutes and 20 seconds
@@ -5235,16 +5235,33 @@ dependencies = [
name = "polkadot-node-core-approval-voting"
version = "0.1.0"
dependencies = [
"assert_matches",
"bitvec",
"futures 0.3.12",
"futures-timer 3.0.2",
"maplit",
"merlin",
"parity-scale-codec",
"parking_lot 0.11.1",
"polkadot-node-primitives",
"polkadot-node-subsystem",
"polkadot-node-subsystem-test-helpers",
"polkadot-overseer",
"polkadot-primitives",
"rand_core 0.5.1",
"sc-client-api",
"sc-keystore",
"schnorrkel",
"sp-application-crypto",
"sp-blockchain",
"sp-consensus-babe",
"sp-consensus-slots",
"sp-core",
"sp-keyring",
"sp-keystore",
"sp-runtime",
"tracing",
"tracing-futures",
]
[[package]]
@@ -5412,11 +5429,13 @@ dependencies = [
"futures 0.3.12",
"memory-lru",
"parity-util-mem",
"polkadot-node-primitives",
"polkadot-node-subsystem",
"polkadot-node-subsystem-test-helpers",
"polkadot-node-subsystem-util",
"polkadot-primitives",
"sp-api",
"sp-consensus-babe",
"sp-core",
"tracing",
"tracing-futures",
@@ -5460,10 +5479,13 @@ dependencies = [
"parity-scale-codec",
"polkadot-primitives",
"polkadot-statement-table",
"sp-consensus-slots",
"schnorrkel",
"sp-application-crypto",
"sp-consensus-babe",
"sp-consensus-vrf",
"sp-core",
"sp-runtime",
"thiserror",
]
[[package]]
@@ -6,16 +6,33 @@ edition = "2018"
[dependencies]
futures = "0.3.8"
futures-timer = "3.0.2"
parity-scale-codec = { version = "2.0.0", default-features = false, features = ["bit-vec", "derive"] }
tracing = "0.1.22"
tracing-futures = "0.2.4"
bitvec = { version = "0.20.1", default-features = false, features = ["alloc"] }
merlin = "2.0"
schnorrkel = "0.9.1"
polkadot-subsystem = { package = "polkadot-node-subsystem", path = "../../subsystem" }
polkadot-overseer = { path = "../../overseer" }
polkadot-primitives = { path = "../../../primitives" }
polkadot-node-primitives = { path = "../../primitives" }
bitvec = "0.20.1"
sc-client-api = { git = "https://github.com/paritytech/substrate", branch = "master", default-features = false }
sc-keystore = { git = "https://github.com/paritytech/substrate", branch = "master", default-features = false }
sp-consensus-slots = { git = "https://github.com/paritytech/substrate", branch = "master", default-features = false }
sp-blockchain = { git = "https://github.com/paritytech/substrate", branch = "master", default-features = false }
sp-application-crypto = { git = "https://github.com/paritytech/substrate", branch = "master", default-features = false, features = ["full_crypto"] }
sp-runtime = { git = "https://github.com/paritytech/substrate", branch = "master", default-features = false }
[dev-dependencies]
\ No newline at end of file
[dev-dependencies]
parking_lot = "0.11.1"
rand_core = "0.5.1" # should match schnorrkel
sp-keyring = { git = "https://github.com/paritytech/substrate", branch = "master" }
sp-keystore = { git = "https://github.com/paritytech/substrate", branch = "master" }
sp-core = { git = "https://github.com/paritytech/substrate", branch = "master" }
sp-consensus-babe = { git = "https://github.com/paritytech/substrate", branch = "master" }
maplit = "1.0.2"
polkadot-node-subsystem-test-helpers = { path = "../../subsystem-test-helpers" }
assert_matches = "1.4.0"
// Copyright 2020 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Approval DB accessors and writers for on-disk persisted approval storage
//! data.
//!
//! We persist data to disk even though it is not intended to be used across runs of the
//! program. This is because, under medium to long periods of finality stalling, whatever
//! the cause may be, the amount of data we would need to keep could grow too large
//! to hold in memory.
//!
//! With tens or hundreds of parachains, hundreds of validators, and parablocks
//! in every relay chain block, there can be a humongous amount of information to reference
//! at any given time.
//!
//! As such, we provide a function from this module to clear the database on start-up.
//! In the future, we may use a temporary DB which doesn't need to be wiped, but for the
//! time being we share the same DB with the rest of Substrate.
pub mod v1;
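The clear-on-startup behavior described above can be sketched in miniature. This is an illustrative stand-in only: the store, the `clear_prefix` helper, and the use of a key prefix are assumptions for the sketch, not the module's actual API (the real code works against Substrate's shared `AuxStore`), though `Approvals_StoredBlocks` is a real key from this diff.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for the shared DB: delete every key belonging
// to the approval-voting schema, leaving unrelated data untouched.
fn clear_prefix(store: &mut BTreeMap<Vec<u8>, Vec<u8>>, prefix: &[u8]) {
    let doomed: Vec<Vec<u8>> = store
        .keys()
        .filter(|k| k.starts_with(prefix))
        .cloned()
        .collect();
    for k in doomed {
        store.remove(&k);
    }
}

fn main() {
    let mut store = BTreeMap::new();
    store.insert(b"Approvals_StoredBlocks".to_vec(), vec![1u8]);
    store.insert(b"Other_Key".to_vec(), vec![2u8]);
    clear_prefix(&mut store, b"Approvals_");
    // Only the approval-voting entry is gone.
    assert!(!store.contains_key(&b"Approvals_StoredBlocks".to_vec()));
    assert!(store.contains_key(&b"Other_Key".to_vec()));
}
```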
@@ -14,27 +14,10 @@
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Auxiliary DB schema, accessors, and writers for on-disk persisted approval storage
//! data.
//!
//! We persist data to disk although it is not intended to be used across runs of the
//! program. This is because under medium to long periods of finality stalling, for whatever
//! reason that may be, the amount of data we'd need to keep would be potentially too large
//! for memory.
//!
//! With tens or hundreds of parachains, hundreds of validators, and parablocks
//! in every relay chain block, there can be a humongous amount of information to reference
//! at any given time.
//!
//! As such, we provide a function from this module to clear the database on start-up.
//! In the future, we may use a temporary DB which doesn't need to be wiped, but for the
//! time being we share the same DB with the rest of Substrate.
// TODO https://github.com/paritytech/polkadot/issues/1975: remove this
#![allow(unused)]
//! Version 1 of the DB schema.
use sc_client_api::backend::AuxStore;
use polkadot_node_primitives::approval::{DelayTranche, RelayVRF};
use polkadot_node_primitives::approval::{DelayTranche, AssignmentCert};
use polkadot_primitives::v1::{
ValidatorIndex, GroupIndex, CandidateReceipt, SessionIndex, CoreIndex,
BlockNumber, Hash, CandidateHash,
@@ -46,73 +29,95 @@ use std::collections::{BTreeMap, HashMap};
use std::collections::hash_map::Entry;
use bitvec::{vec::BitVec, order::Lsb0 as BitOrderLsb0};
use super::Tick;
#[cfg(test)]
mod tests;
// slot_duration * 2 + DelayTranche gives the number of delay tranches since the
// unix epoch.
#[derive(Encode, Decode, Clone, Copy, Debug, PartialEq)]
pub struct Tick(u64);
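Reading the comment above literally — a tick is a 500ms unit, so twice a timestamp in seconds, plus a tranche offset, gives a tick count since the unix epoch — a hypothetical helper might look like the following. The function name, the `slot_start_secs` parameter, and the 500ms tick length are assumptions for illustration, not taken from this diff.

```rust
type DelayTranche = u32;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Tick(u64);

// Sketch: with 500ms ticks there are 2 ticks per second, so a slot
// starting at `slot_start_secs` seconds since the unix epoch maps to
// tick `slot_start_secs * 2`, and tranche `n` lands `n` ticks later.
fn tranche_tick(slot_start_secs: u64, tranche: DelayTranche) -> Tick {
    Tick(slot_start_secs * 2 + tranche as u64)
}

fn main() {
    assert_eq!(tranche_tick(100, 0), Tick(200));
    assert_eq!(tranche_tick(100, 3), Tick(203));
}
```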
pub type Bitfield = BitVec<BitOrderLsb0, u8>;
const STORED_BLOCKS_KEY: &[u8] = b"Approvals_StoredBlocks";
/// Details pertaining to our assignment on a block.
#[derive(Encode, Decode, Debug, Clone, PartialEq)]
pub struct OurAssignment {
pub cert: AssignmentCert,
pub tranche: DelayTranche,
pub validator_index: ValidatorIndex,
// Whether the assignment has been triggered already.
pub triggered: bool,
}
/// Metadata regarding a specific tranche of assignments for a specific candidate.
#[derive(Debug, Clone, Encode, Decode, PartialEq)]
pub(crate) struct TrancheEntry {
tranche: DelayTranche,
#[derive(Encode, Decode, Debug, Clone, PartialEq)]
pub struct TrancheEntry {
pub tranche: DelayTranche,
// Assigned validators, and the instant we received their assignment, rounded
// to the nearest tick.
assignments: Vec<(ValidatorIndex, Tick)>,
pub assignments: Vec<(ValidatorIndex, Tick)>,
}
/// Metadata regarding approval of a particular candidate within the context of some
/// particular block.
#[derive(Debug, Clone, Encode, Decode, PartialEq)]
pub(crate) struct ApprovalEntry {
tranches: Vec<TrancheEntry>,
backing_group: GroupIndex,
// When the next wakeup for this entry should occur. This is either to
// check a no-show or to check if we need to broadcast an assignment.
next_wakeup: Tick,
our_assignment: Option<OurAssignment>,
#[derive(Encode, Decode, Debug, Clone, PartialEq)]
pub struct ApprovalEntry {
pub tranches: Vec<TrancheEntry>,
pub backing_group: GroupIndex,
pub our_assignment: Option<OurAssignment>,
// `n_validators` bits.
assignments: BitVec<BitOrderLsb0, u8>,
approved: bool,
pub assignments: Bitfield,
pub approved: bool,
}
/// Metadata regarding approval of a particular candidate.
#[derive(Debug, Clone, Encode, Decode, PartialEq)]
pub(crate) struct CandidateEntry {
candidate: CandidateReceipt,
session: SessionIndex,
#[derive(Encode, Decode, Debug, Clone, PartialEq)]
pub struct CandidateEntry {
pub candidate: CandidateReceipt,
pub session: SessionIndex,
// Assignments are based on blocks, so we need to track assignments separately
// based on the block we are looking at.
block_assignments: BTreeMap<Hash, ApprovalEntry>,
approvals: BitVec<BitOrderLsb0, u8>,
pub block_assignments: BTreeMap<Hash, ApprovalEntry>,
pub approvals: Bitfield,
}
/// Metadata regarding approval of a particular block, by way of approval of the
/// candidates contained within it.
#[derive(Debug, Clone, Encode, Decode, PartialEq)]
pub(crate) struct BlockEntry {
block_hash: Hash,
session: SessionIndex,
slot: Slot,
relay_vrf_story: RelayVRF,
#[derive(Encode, Decode, Debug, Clone, PartialEq)]
pub struct BlockEntry {
pub block_hash: Hash,
pub session: SessionIndex,
pub slot: Slot,
/// Random bytes derived from the VRF submitted within the block by the block
/// author as a credential and used as input to approval assignment criteria.
pub relay_vrf_story: [u8; 32],
// The candidates included as-of this block and the index of the core they are
// leaving. Sorted ascending by core index.
candidates: Vec<(CoreIndex, CandidateHash)>,
pub candidates: Vec<(CoreIndex, CandidateHash)>,
// A bitfield where the i'th bit corresponds to the i'th candidate in `candidates`.
// The i'th bit is `true` iff the candidate has been approved in the context of this
// block. The block can be considered approved if the bitfield has all bits set to `true`.
approved_bitfield: BitVec<BitOrderLsb0, u8>,
children: Vec<Hash>,
pub approved_bitfield: Bitfield,
pub children: Vec<Hash>,
}
/// A range from earliest..last block number stored within the DB.
#[derive(Debug, Clone, Encode, Decode, PartialEq)]
pub(crate) struct StoredBlockRange(BlockNumber, BlockNumber);
#[derive(Encode, Decode, Debug, Clone, PartialEq)]
pub struct StoredBlockRange(BlockNumber, BlockNumber);
impl From<crate::Tick> for Tick {
fn from(tick: crate::Tick) -> Tick {
Tick(tick)
}
}
// TODO https://github.com/paritytech/polkadot/issues/1975: probably in lib.rs
#[derive(Debug, Clone, Encode, Decode, PartialEq)]
pub(crate) struct OurAssignment { }
impl From<Tick> for crate::Tick {
fn from(tick: Tick) -> crate::Tick {
tick.0
}
}
/// Canonicalize some particular block, pruning everything before it and
/// pruning any competing branches at the same height.
@@ -351,9 +356,9 @@ fn load_decode<D: Decode>(store: &impl AuxStore, key: &[u8])
/// candidate and approval entries.
#[derive(Clone)]
pub(crate) struct NewCandidateInfo {
candidate: CandidateReceipt,
backing_group: GroupIndex,
our_assignment: Option<OurAssignment>,
pub candidate: CandidateReceipt,
pub backing_group: GroupIndex,
pub our_assignment: Option<OurAssignment>,
}
/// Record a new block entry.
@@ -364,7 +369,8 @@ pub(crate) struct NewCandidateInfo {
/// parent hash.
///
/// Has no effect if there is already an entry for the block or `candidate_info` returns
/// `None` for any of the candidates referenced by the block entry.
/// `None` for any of the candidates referenced by the block entry. In those cases,
/// nothing is written and an empty list of candidate entries is returned.
pub(crate) fn add_block_entry(
store: &impl AuxStore,
parent_hash: Hash,
@@ -372,7 +378,7 @@ pub(crate) fn add_block_entry(
entry: BlockEntry,
n_validators: usize,
candidate_info: impl Fn(&CandidateHash) -> Option<NewCandidateInfo>,
) -> sp_blockchain::Result<()> {
) -> sp_blockchain::Result<Vec<(CandidateHash, CandidateEntry)>> {
let session = entry.session;
let new_block_range = {
@@ -392,13 +398,15 @@ pub(crate) fn add_block_entry(
let mut blocks_at_height = load_blocks_at_height(store, number)?;
if blocks_at_height.contains(&entry.block_hash) {
// seems we already have a block entry for this block. nothing to do here.
return Ok(())
return Ok(Vec::new())
}
blocks_at_height.push(entry.block_hash);
(blocks_at_height_key(number), blocks_at_height.encode())
};
let mut candidate_entries = Vec::with_capacity(entry.candidates.len());
let candidate_entry_updates = {
let mut updated_entries = Vec::with_capacity(entry.candidates.len());
for &(_, ref candidate_hash) in &entry.candidates {
@@ -407,7 +415,7 @@ pub(crate) fn add_block_entry(
backing_group,
our_assignment,
} = match candidate_info(candidate_hash) {
None => return Ok(()),
None => return Ok(Vec::new()),
Some(info) => info,
};
@@ -424,7 +432,6 @@ pub(crate) fn add_block_entry(
ApprovalEntry {
tranches: Vec::new(),
backing_group,
next_wakeup: 0,
our_assignment,
assignments: bitvec::bitvec![BitOrderLsb0, u8; 0; n_validators],
approved: false,
@@ -434,6 +441,8 @@ pub(crate) fn add_block_entry(
updated_entries.push(
(candidate_entry_key(&candidate_hash), candidate_entry.encode())
);
candidate_entries.push((*candidate_hash, candidate_entry));
}
updated_entries
@@ -466,11 +475,61 @@ pub(crate) fn add_block_entry(
store.insert_aux(&all_keys_and_values, &[])?;
Ok(())
Ok(candidate_entries)
}
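The all-or-nothing control flow of `add_block_entry` — if the `candidate_info` lookup returns `None` for any referenced candidate, the function writes nothing and returns an empty list — can be reduced to a self-contained sketch. The types here (`u64` for the hash, `String` for the entry) are stand-ins for the real `CandidateHash`/`CandidateEntry`, and no DB write is modeled:

```rust
type CandidateHash = u64;

// Reduced model of the lookup-then-commit pattern: collect an entry
// for every candidate, but bail out with an empty result the moment
// any lookup fails, so a partial block is never recorded.
fn add_block_entry(
    candidates: &[CandidateHash],
    candidate_info: impl Fn(&CandidateHash) -> Option<String>,
) -> Vec<(CandidateHash, String)> {
    let mut entries = Vec::with_capacity(candidates.len());
    for hash in candidates {
        match candidate_info(hash) {
            None => return Vec::new(), // unknown candidate: no effect at all
            Some(info) => entries.push((*hash, info)),
        }
    }
    // (the real code encodes and writes the entries here)
    entries
}

fn main() {
    let known = |h: &CandidateHash| {
        if *h < 10 { Some(format!("cand-{}", h)) } else { None }
    };
    assert_eq!(add_block_entry(&[1, 2], known).len(), 2);
    // One unknown candidate voids the whole import.
    assert!(add_block_entry(&[1, 42], known).is_empty());
}
```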
// An atomic transaction of multiple candidate or block entries.
#[derive(Default)]
#[must_use = "Transactions do nothing unless written to a DB"]
pub struct Transaction {
block_entries: HashMap<Hash, BlockEntry>,
candidate_entries: HashMap<CandidateHash, CandidateEntry>,
}
impl Transaction {
/// Put a block entry in the transaction, overwriting any other with the
/// same hash.
pub(crate) fn put_block_entry(&mut self, entry: BlockEntry) {
let hash = entry.block_hash;
let _ = self.block_entries.insert(hash, entry);
}
/// Put a candidate entry in the transaction, overwriting any other with the
/// same hash.
pub(crate) fn put_candidate_entry(&mut self, hash: CandidateHash, entry: CandidateEntry) {
let _ = self.candidate_entries.insert(hash, entry);
}
/// Write the contents of the transaction, atomically, to the DB.
pub(crate) fn write(self, db: &impl AuxStore) -> sp_blockchain::Result<()> {
if self.block_entries.is_empty() && self.candidate_entries.is_empty() {
return Ok(())
}
let blocks: Vec<_> = self.block_entries.into_iter().map(|(hash, entry)| {
let k = block_entry_key(&hash);
let v = entry.encode();
(k, v)
}).collect();
let candidates: Vec<_> = self.candidate_entries.into_iter().map(|(hash, entry)| {
let k = candidate_entry_key(&hash);
let v = entry.encode();
(k, v)
}).collect();
let kv = blocks.iter().map(|(k, v)| (&k[..], &v[..]))
.chain(candidates.iter().map(|(k, v)| (&k[..], &v[..])))
.collect::<Vec<_>>();
db.insert_aux(&kv, &[])
}
}
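The buffered-write pattern behind the `Transaction` utility — accumulate entries in maps so later puts overwrite earlier ones, then flush everything in one batch — can be shown with a stand-alone sketch. `MemStore` and its `insert_aux` are stand-ins for the real `AuxStore`, and entries are plain byte vectors rather than encoded `BlockEntry`/`CandidateEntry` values:

```rust
use std::collections::HashMap;

#[derive(Default)]
#[must_use = "Transactions do nothing unless written to a DB"]
struct Transaction {
    entries: HashMap<Vec<u8>, Vec<u8>>,
}

// Stand-in for the aux store: one batch insert models the atomic write.
#[derive(Default)]
struct MemStore {
    inner: HashMap<Vec<u8>, Vec<u8>>,
}

impl MemStore {
    fn insert_aux(&mut self, kv: Vec<(Vec<u8>, Vec<u8>)>) {
        for (k, v) in kv {
            self.inner.insert(k, v);
        }
    }
}

impl Transaction {
    // Later puts for the same key overwrite earlier ones, so only the
    // final state of each entry reaches the DB.
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        let _ = self.entries.insert(key, value);
    }

    fn write(self, db: &mut MemStore) {
        if self.entries.is_empty() {
            return; // empty transaction: skip the DB round-trip
        }
        db.insert_aux(self.entries.into_iter().collect());
    }
}

fn main() {
    let mut store = MemStore::default();
    let mut tx = Transaction::default();
    tx.put(b"block:a".to_vec(), b"v1".to_vec());
    tx.put(b"block:a".to_vec(), b"v2".to_vec()); // overwrites v1
    tx.write(&mut store);
    assert_eq!(store.inner.get(&b"block:a".to_vec()), Some(&b"v2".to_vec()));
}
```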
/// Load the stored-blocks key from the state.
pub(crate) fn load_stored_blocks(store: &impl AuxStore)
fn load_stored_blocks(store: &impl AuxStore)
-> sp_blockchain::Result<Option<StoredBlockRange>>
{
load_decode(store, STORED_BLOCKS_KEY)
@@ -17,8 +17,8 @@
//! Tests for the aux-schema of approval voting.
use super::*;
use std::cell::RefCell;
use polkadot_primitives::v1::Id as ParaId;
use std::cell::RefCell;
#[derive(Default)]
struct TestStore {
@@ -49,28 +49,28 @@ impl AuxStore for TestStore {
}
impl TestStore {
fn write_stored_blocks(&self, range: StoredBlockRange) {
pub(crate) fn write_stored_blocks(&self, range: StoredBlockRange) {
self.inner.borrow_mut().insert(
STORED_BLOCKS_KEY.to_vec(),
range.encode(),
);
}
fn write_blocks_at_height(&self, height: BlockNumber, blocks: &[Hash]) {
pub(crate) fn write_blocks_at_height(&self, height: BlockNumber, blocks: &[Hash]) {
self.inner.borrow_mut().insert(
blocks_at_height_key(height).to_vec(),
blocks.encode(),
);
}
fn write_block_entry(&self, block_hash: &Hash, entry: &BlockEntry) {
pub(crate) fn write_block_entry(&self, block_hash: &Hash, entry: &BlockEntry) {
self.inner.borrow_mut().insert(
block_entry_key(block_hash).to_vec(),
entry.encode(),
);
}
fn write_candidate_entry(&self, candidate_hash: &CandidateHash, entry: &CandidateEntry) {
pub(crate) fn write_candidate_entry(&self, candidate_hash: &CandidateHash, entry: &CandidateEntry) {
self.inner.borrow_mut().insert(
candidate_entry_key(candidate_hash).to_vec(),
entry.encode(),
@@ -89,8 +89,8 @@ fn make_block_entry(
BlockEntry {
block_hash,
session: 1,
slot: 1.into(),
relay_vrf_story: RelayVRF([0u8; 32]),
slot: Slot::from(1),
relay_vrf_story: [0u8; 32],
approved_bitfield: make_bitvec(candidates.len()),
candidates,
children: Vec::new(),
@@ -129,7 +129,6 @@ fn read_write() {
(hash_a, ApprovalEntry {
tranches: Vec::new(),
backing_group: GroupIndex(1),
next_wakeup: 1000,
our_assignment: None,
assignments: Default::default(),
approved: false,
@@ -156,7 +155,7 @@ fn read_write() {
];
let delete_keys: Vec<_> = delete_keys.iter().map(|k| &k[..]).collect();
store.insert_aux(&[], &delete_keys);
store.insert_aux(&[], &delete_keys).unwrap();
assert!(load_stored_blocks(&store).unwrap().is_none());
assert!(load_blocks_at_height(&store, 1).unwrap().is_empty());
@@ -296,7 +295,6 @@ fn clear_works() {
(hash_a, ApprovalEntry {
tranches: Vec::new(),
backing_group: GroupIndex(1),
next_wakeup: 1000,
our_assignment: None,
assignments: Default::default(),
approved: false,
@@ -331,7 +329,7 @@ fn canonicalize_works() {
// -> B1 -> C1 -> D1
// A -> B2 -> C2 -> D2
//
// We'll canonicalize C1. Everything except D1 should disappear.
//
// Candidates:
// Cand1 in B2
// Copyright 2020 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
//! Entries pertaining to approval which need to be persisted.
//!
//! The actual persisting of data is handled by the `approval_db` module.
//! Within that context, things are plain-old-data. Within this module,
//! data and logic are intertwined.
use polkadot_node_primitives::approval::{DelayTranche, RelayVRFStory, AssignmentCert};
use polkadot_primitives::v1::{
ValidatorIndex, CandidateReceipt, SessionIndex, GroupIndex, CoreIndex,
Hash, CandidateHash,
};
use sp_consensus_slots::Slot;
use std::collections::BTreeMap;
use bitvec::{slice::BitSlice, vec::BitVec, order::Lsb0 as BitOrderLsb0};
use super::time::Tick;
use super::criteria::OurAssignment;
/// Metadata regarding a specific tranche of assignments for a specific candidate.
#[derive(Debug, Clone, PartialEq)]
pub struct TrancheEntry {
tranche: DelayTranche,
// Assigned validators, and the instant we received their assignment, rounded
// to the nearest tick.
assignments: Vec<(ValidatorIndex, Tick)>,
}
impl TrancheEntry {
/// Get the tranche of this entry.
pub fn tranche(&self) -> DelayTranche {
self.tranche
}
/// Get the assignments for this entry.
pub fn assignments(&self) -> &[(ValidatorIndex, Tick)] {
&self.assignments
}
}
impl From<crate::approval_db::v1::TrancheEntry> for TrancheEntry {
fn from(entry: crate::approval_db::v1::TrancheEntry) -> Self {
TrancheEntry {
tranche: entry.tranche,
assignments: entry.assignments.into_iter().map(|(v, t)| (v, t.into())).collect(),
}
}
}
impl From<TrancheEntry> for crate::approval_db::v1::TrancheEntry {
fn from(entry: TrancheEntry) -> Self {
Self {
tranche: entry.tranche,
<