Commit c786fb21 authored by Peter Goodspeed-Niklaus, committed by GitHub

Refactor election solution trimming for efficiency (#8614)

* Refactor election solution trimming for efficiency

The previous version always trimmed the `CompactOf<T>` instance,
which was intrinsically inefficient: that's a packed data structure,
which is naturally expensive to edit. It's much easier to edit
the unpacked data structures: the `voters` and `assignments` lists.
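
As an illustration of the difference (hypothetical, simplified types; the real generated
`Compact` groups votes into per-vote-count fields up to the vote limit):

```
// Illustrative only: dropping voters from the unpacked list is a cheap truncate,
// while the packed form keeps per-vote-count buckets that must all be scanned
// and rebuilt for every removal.
struct Unpacked {
    // (voter index, votes as (target index, weight))
    assignments: Vec<(u32, Vec<(u16, u32)>)>,
}

struct Packed {
    votes1: Vec<(u32, u16)>,             // voters with exactly one vote
    votes2: Vec<(u32, (u16, u32), u16)>, // voters with exactly two votes
    // ...and so on, up to the vote limit
}

impl Unpacked {
    fn drop_smallest(&mut self, keep: usize) {
        // assuming the list is sorted by descending stake, this drops the tail
        self.assignments.truncate(keep);
    }
}

impl Packed {
    fn remove_voter(&mut self, voter: u32) {
        // every bucket has to be scanned and rebuilt
        self.votes1.retain(|(who, _)| *who != voter);
        self.votes2.retain(|(who, _, _)| *who != voter);
        // ...and so on for every bucket
    }
}
```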

* rework length-trim tests to work with the new interface

Test suite now compiles. Tests still don't pass because the macro
generating the compact structure still generates `unimplemented!()`
for the actual `compact_length_of` implementation.

* simplify

* add a fuzzer which can validate `Compact::encoded_size_for`

The `Compact` solution type is generated distinctly for each runtime,
and has both three type parameters and a built-in limit to the number
of candidates that each voter can vote for. Finally, they have an
optional `#[compact]` attribute which changes the encoding behavior.
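
For reference, this is the shape of invocation the fuzzer family is parameterized over;
it is the same invocation used by the mock module added later in this diff:

```
// A new fuzzer in the family would adjust the type parameters, the vote limit (16),
// or the `#[compact]` attribute of this invocation.
sp_npos_elections_compact::generate_solution_type!(
    #[compact]
    pub struct Compact::<VoterIndex = u32, TargetIndex = u16, Accuracy = Accuracy>(16)
);
```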

The assignment truncation algorithm we're using depends on the ability
to efficiently and accurately determine how much space a `Compact`
solution will take once encoded.

Together, these two facts imply that simple unit tests are not
sufficient to validate the behavior of `Compact::encoded_size_for`.
This commit adds such a fuzzer. It is designed such that it is possible
to add a new fuzzer to the family by simply adjusting the
`generate_solution_type` macro invocation as desired, and making a
few minor documentation edits.

Of course, the fuzzer still fails for now: the generated implementation
for `encoded_size_for` is still `unimplemented!()`. However, once
the macro is updated appropriately, this fuzzer family should allow
us to gain confidence in the correctness of the generated code.

* Revert "add a fuzzer which can validate `Compact::encoded_size_for`"

This reverts commit 916038790887e64217c6a46e9a6d281386762bfb.

The design of `Compact::encoded_size_for` is flawed. When `#[compact]`
mode is enabled, every integer in the dataset is encoded using a
variable-length (SCALE `Compact`) encoding. This means that it is impossible
to compute the final length faster than actually encoding the data structure,
because the encoded length of every field varies with the actual value stored.
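
A quick illustration of why, assuming the standard SCALE `Compact` wrapper from
`parity-scale-codec` (imported as `codec`, as is usual in Substrate):

```
// The encoded width of a compact-encoded integer grows with its value, so the total
// length can only be known by inspecting every value -- effectively encoding it.
use codec::{Compact, Encode};

fn main() {
    assert_eq!(Compact(1u32).encode().len(), 1);       // single-byte mode (0..=63)
    assert_eq!(Compact(1_000u32).encode().len(), 2);   // two-byte mode
    assert_eq!(Compact(100_000u32).encode().len(), 4); // four-byte mode
    assert_eq!(Compact(u32::MAX).encode().len(), 5);   // big-integer mode
}
```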

Given that we won't be adding that method to the trait, we won't be
needing a fuzzer to validate its behavior.

* revert changes to `trait CompactSolution`

If `CompactSolution::encoded_size_for` can't be implemented in the
way that we wanted, there's no point in adding it.

* WIP: restructure trim_assignments_length by actually encoding

This is not as efficient as what we'd hoped for, but it should still
be better than what it's replacing. Overall efficiency of
`fn trim_assignments_length` is now `O(edges * lg assignments.len())`.
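
Roughly, the new approach binary-searches for the longest prefix of the stake-sorted
assignments whose encoding fits, so each of the `lg n` probes costs one `O(edges)`
encoding. A simplified sketch (the real `trim_assignments_length` in this diff also
propagates errors and logs what it removed):

```
// Simplified sketch: `assignments` is pre-sorted by descending stake, so truncating
// to the longest prefix that fits always drops the smallest voters first.
fn trim_by_length<A>(
    assignments: &mut Vec<A>,
    max_len: usize,
    encoded_size_of: impl Fn(&[A]) -> usize,
) {
    if assignments.is_empty() {
        return;
    }
    let (mut low, mut high) = (0, assignments.len());
    while high - low > 1 {
        let mid = (low + high) / 2;
        if encoded_size_of(&assignments[..mid]) <= max_len {
            low = mid;
        } else {
            high = mid;
        }
    }
    // `low` items fit; check whether `low + 1` items also fit before truncating.
    let keep = if encoded_size_of(&assignments[..low + 1]) <= max_len { low + 1 } else { low };
    assignments.truncate(keep);
}
```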

* fix compiler errors

* don't sort voters, just assignments

Sorting the `voters` list causes lots of problems; an invariant that
we need to maintain is that an index into the voters list has a stable
meaning.

Luckily, it turns out that there is no need for the assignments list
to correspond to the voters list. That isn't an invariant, though previously
I'd thought that it was.

This simplifies things; we can just leave the voters list alone,
and sort the assignments list the way that is convenient.
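
Concretely, the miner now sorts the assignments by descending voter stake (looked up
through the existing index cache) while `voters` stays untouched. A self-contained
sketch with simplified stand-in types, mirroring the sort added in this diff:

```
use std::{cmp::Reverse, collections::BTreeMap};

// Simplified stand-ins for the runtime types used by the miner.
type AccountId = u64;
type VoteWeight = u64;
type Voter = (AccountId, VoteWeight, Vec<AccountId>);

struct Assignment {
    who: AccountId,
    distribution: Vec<(AccountId, u32)>,
}

/// Sort assignments by descending stake so that truncation drops the smallest voters first.
fn sort_by_descending_stake(
    assignments: &mut [Assignment],
    voters: &[Voter],
    cache: &BTreeMap<AccountId, usize>, // voter id -> index into `voters`
) {
    assignments.sort_by_key(|Assignment { who, .. }| {
        let stake = cache.get(who).map(|&idx| voters[idx].1).unwrap_or_default();
        Reverse(stake)
    });
}
```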

* WIP: add `IndexAssignment` type to speed up repeatedly creating `Compact`

Next up: `impl<'a, T> From<&'a [IndexAssignmentOf<T>]> for Compact`,
in the proc-macro which makes `Compact`. Should be a pretty straightforward
adaptation of `from_assignment`.

* Add IndexAssignment and conversion method to CompactSolution

This involves a bit of duplication of types from
`election-provider-multi-phase`; we'll clean those up shortly.

I'm not entirely happy that we had to add a `from_index_assignments`
method to `CompactSolution`, but we couldn't define
`trait CompactSolution: TryFrom<&'a [Self::IndexAssignment]>` because
that made trait lookup recursive, and I didn't want to propagate
`CompactSolutionOf<T> + TryFrom<&[IndexAssignmentOf<T>]>` everywhere
that compact solutions are specified.
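
What call sites end up doing instead is excerpted below (from the miner and benchmark
code later in this diff): the conversion is an ordinary generated `TryFrom` impl, reused
both to build the final compact solution and to measure its encoded length.

```
// Excerpt (not self-contained): `CompactOf<T>` and `IndexAssignmentOf<T>` are the
// pallet's type aliases; `TryFrom` is brought in via `sp_std::convert::TryFrom`.
let encoded_size_of = |assignments: &[IndexAssignmentOf<T>]| {
    CompactOf::<T>::try_from(assignments).map(|compact| compact.encoded_size())
};
let compact = CompactOf::<T>::try_from(index_assignments.as_slice())?;
```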

* use `CompactSolution::from_index_assignment` and clean up dead code

* get rid of `from_index_assignments` in favor of `TryFrom`

* cause `pallet-election-provider-multi-phase` tests to compile successfully

Mostly that's just updating the various test functions to keep track of
refactorings elsewhere, though in a few places we needed to refactor some
test-only helpers as well.

* fix infinite binary search loop

Turns out that moving `low` and `high` into an averager function is a
bad idea, because the averager gets copies of those values, which
of course are never updated. Can't use mutable references, because
we want to read them elsewhere in the code. Just compute the average
directly; life is better that way.
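
A hypothetical minimal reproduction of what went wrong (not the actual pallet code):
the closure copies `low` and `high` once, so later updates to the loop variables are
invisible to it and the midpoint never moves.

```
fn stuck_midpoint() -> (usize, usize) {
    let (mut low, high) = (0usize, 100usize);
    let average = move || (low + high) / 2; // captures copies: always (0 + 100) / 2
    low = 80; // the copy inside the closure is unaffected
    // returns (50, 90): the closure still reports 50, so a loop driven by it never converges
    (average(), (low + high) / 2)
}
```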

* fix a test failure

* fix the rest of test failures

* remove unguarded subtraction

* fix npos-elections tests compilation

* ensure we use sp_std::vec::Vec in assignments

* add IndexAssignmentOf to sp_npos_elections

* move miner types to `unsigned`

* use stable sort

* rewrap some long comments

* use existing cache instead of building a dedicated stake map

* generalize the TryFrom bound on CompactSolution

* undo adding sp-core dependency

* consume assignments to produce index_assignments

* Add a test of Assignment -> IndexAssignment -> Compact

* fix `IndexAssignmentOf` doc

* move compact test from sp-npos-elections-compact to sp-npos-elections

This means that we can put the mocking parts of that into a proper
mock package, and put the test into a test package among other tests.

Having the mocking parts in a mock package enables a benchmark
(which is treated as a separate crate) to import them.

* rename assignments -> sorted_assignments

* sort after reducing to avoid potential re-sort issues

* add runtime benchmark, fix critical binary search error

"Why don't you add a benchmark?", he said. "It'll be good practice,
and can help demonstrate that this isn't blowing up the runtime."

He was absolutely right.

The biggest discovery is that adding a parametric benchmark means that
you get a bunch of new test cases, for free. This is excellent, because
those test cases uncovered a binary search bug. Fixing that simplified
that part of the code nicely.

The other nice thing you get from a parametric benchmark is data about
what each parameter does. In this case, `f` is the size factor: what
percent of the votes (by size) should be removed. 0 means that we should
keep everything, 95 means that we should trim down to 5% of original size
or less.
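
In other words, the benchmark's target length is `(100 - f)%` of the full encoded size,
rounded up. A small sketch of the arithmetic (the numbers in the comments are made up
for illustration):

```
// Mirrors the derivation in the benchmark body in this diff; the `PerThing` trait
// must be in scope for `mul_ceil`.
use sp_arithmetic::per_things::{PerThing, Percent};

/// `f` is the percentage to remove; it must be at most 100.
fn desired_size(full_encoded_len: usize, f: u8) -> usize {
    Percent::from_percent(100 - f).mul_ceil(full_encoded_len)
}

// desired_size(10_000, 0)  == 10_000  -- keep everything
// desired_size(10_000, 95) == 500     -- trim to 5% of the original size or less
```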

```
Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=     3846
    + v    0.015
    + t        0
    + a    0.192
    + d        0
    + f        0
              µs

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    v     t     a     d     f   mean µs  sigma µs       %
<snip>
 6000  1600  3000   800     0      4385     75.87    1.7%
 6000  1600  3000   800     9      4089     46.28    1.1%
 6000  1600  3000   800    18      3793     36.45    0.9%
 6000  1600  3000   800    27      3365     41.13    1.2%
 6000  1600  3000   800    36      3096     7.498    0.2%
 6000  1600  3000   800    45      2774     17.96    0.6%
 6000  1600  3000   800    54      2057     37.94    1.8%
 6000  1600  3000   800    63      1885     2.515    0.1%
 6000  1600  3000   800    72      1591     3.203    0.2%
 6000  1600  3000   800    81      1219     25.72    2.1%
 6000  1600  3000   800    90       859     5.295    0.6%
 6000  1600  3000   800    95     684.6     2.969    0.4%

Quality and confidence:
param     error
v         0.008
t         0.029
a         0.008
d         0.044
f         0.185

Model:
Time ~=     3957
    + v    0.009
    + t        0
    + a    0.185
    + d        0
    + f        0
              µs
```

What's nice about this is the clear negative correlation between
amount removed and total time. The more we remove, the less total
time things take.
parent 5a1cc8cf
Showing changes with 871 additions and 353 deletions
......@@ -20,7 +20,7 @@ log = { version = "0.4.14", default-features = false }
frame-support = { version = "3.0.0", default-features = false, path = "../support" }
frame-system = { version = "3.0.0", default-features = false, path = "../system" }
sp-io ={ version = "3.0.0", default-features = false, path = "../../primitives/io" }
sp-io = { version = "3.0.0", default-features = false, path = "../../primitives/io" }
sp-std = { version = "3.0.0", default-features = false, path = "../../primitives/std" }
sp-runtime = { version = "3.0.0", default-features = false, path = "../../primitives/runtime" }
sp-npos-elections = { version = "3.0.0", default-features = false, path = "../../primitives/npos-elections" }
......@@ -37,8 +37,9 @@ parking_lot = "0.11.0"
rand = { version = "0.7.3" }
hex-literal = "0.3.1"
substrate-test-utils = { version = "3.0.0", path = "../../test-utils" }
sp-core = { version = "3.0.0", default-features = false, path = "../../primitives/core" }
sp-io = { version = "3.0.0", path = "../../primitives/io" }
sp-core = { version = "3.0.0", path = "../../primitives/core" }
sp-npos-elections = { version = "3.0.0", default-features = false, features = [ "mocks" ], path = "../../primitives/npos-elections" }
sp-tracing = { version = "3.0.0", path = "../../primitives/tracing" }
frame-election-provider-support = { version = "3.0.0", features = ["runtime-benchmarks"], path = "../election-provider-support" }
pallet-balances = { version = "3.0.0", path = "../balances" }
......@@ -64,5 +65,6 @@ std = [
runtime-benchmarks = [
"frame-benchmarking",
"rand",
"sp-npos-elections/mocks",
]
try-runtime = ["frame-support/try-runtime"]
......@@ -18,15 +18,16 @@
//! Two phase election pallet benchmarking.
use super::*;
use crate::Pallet as MultiPhase;
use crate::{Pallet as MultiPhase, unsigned::IndexAssignmentOf};
use frame_benchmarking::impl_benchmark_test_suite;
use frame_support::{assert_ok, traits::OnInitialize};
use frame_system::RawOrigin;
use rand::{prelude::SliceRandom, rngs::SmallRng, SeedableRng};
use frame_election_provider_support::Assignment;
use sp_arithmetic::traits::One;
use sp_arithmetic::{per_things::Percent, traits::One};
use sp_npos_elections::IndexAssignment;
use sp_runtime::InnerOf;
use sp_std::convert::TryInto;
use sp_std::convert::{TryFrom, TryInto};
const SEED: u32 = 999;
......@@ -135,7 +136,7 @@ fn solution_with_size<T: Config>(
.collect::<Vec<_>>();
let compact =
<CompactOf<T>>::from_assignment(assignments, &voter_index, &target_index).unwrap();
<CompactOf<T>>::from_assignment(&assignments, &voter_index, &target_index).unwrap();
let score = compact.clone().score(&winners, stake_of, voter_at, target_at).unwrap();
let round = <MultiPhase<T>>::round();
......@@ -254,6 +255,69 @@ frame_benchmarking::benchmarks! {
assert!(<MultiPhase<T>>::queued_solution().is_some());
}
#[extra]
trim_assignments_length {
// number of votes in snapshot.
let v in (T::BenchmarkingConfig::VOTERS[0]) .. T::BenchmarkingConfig::VOTERS[1];
// number of targets in snapshot.
let t in (T::BenchmarkingConfig::TARGETS[0]) .. T::BenchmarkingConfig::TARGETS[1];
// number of assignments, i.e. compact.len(). This means the active nominators, thus must be
// a subset of `v` component.
let a in (T::BenchmarkingConfig::ACTIVE_VOTERS[0]) .. T::BenchmarkingConfig::ACTIVE_VOTERS[1];
// number of desired targets. Must be a subset of `t` component.
let d in (T::BenchmarkingConfig::DESIRED_TARGETS[0]) .. T::BenchmarkingConfig::DESIRED_TARGETS[1];
// Subtract this percentage from the actual encoded size
let f in 0 .. 95;
// Compute a random solution, then work backwards to get the lists of voters, targets, and assignments
let witness = SolutionOrSnapshotSize { voters: v, targets: t };
let RawSolution { compact, .. } = solution_with_size::<T>(witness, a, d);
let RoundSnapshot { voters, targets } = MultiPhase::<T>::snapshot().unwrap();
let voter_at = helpers::voter_at_fn::<T>(&voters);
let target_at = helpers::target_at_fn::<T>(&targets);
let mut assignments = compact.into_assignment(voter_at, target_at).unwrap();
// make a voter cache and some helper functions for access
let cache = helpers::generate_voter_cache::<T>(&voters);
let voter_index = helpers::voter_index_fn::<T>(&cache);
let target_index = helpers::target_index_fn::<T>(&targets);
// sort assignments by decreasing voter stake
assignments.sort_by_key(|crate::unsigned::Assignment::<T> { who, .. }| {
let stake = cache.get(&who).map(|idx| {
let (_, stake, _) = voters[*idx];
stake
}).unwrap_or_default();
sp_std::cmp::Reverse(stake)
});
let mut index_assignments = assignments
.into_iter()
.map(|assignment| IndexAssignment::new(&assignment, &voter_index, &target_index))
.collect::<Result<Vec<_>, _>>()
.unwrap();
let encoded_size_of = |assignments: &[IndexAssignmentOf<T>]| {
CompactOf::<T>::try_from(assignments).map(|compact| compact.encoded_size())
};
let desired_size = Percent::from_percent(100 - f.saturated_into::<u8>())
.mul_ceil(encoded_size_of(index_assignments.as_slice()).unwrap());
log!(trace, "desired_size = {}", desired_size);
}: {
MultiPhase::<T>::trim_assignments_length(
desired_size.saturated_into(),
&mut index_assignments,
&encoded_size_of,
).unwrap();
} verify {
let compact = CompactOf::<T>::try_from(index_assignments.as_slice()).unwrap();
let encoding = compact.encode();
log!(trace, "encoded size prediction = {}", encoded_size_of(index_assignments.as_slice()).unwrap());
log!(trace, "actual encoded size = {}", encoding.len());
assert!(encoding.len() <= desired_size);
}
// This is checking a valid solution. The worse case is indeed a valid solution.
feasibility_check {
// number of votes in snapshot.
......
......@@ -62,6 +62,18 @@ pub fn voter_index_fn<T: Config>(
}
}
/// Create a function that returns the index of a voter in the snapshot.
///
/// Same as [`voter_index_fn`] but the returned function owns all its necessary data; nothing is
/// borrowed.
pub fn voter_index_fn_owned<T: Config>(
cache: BTreeMap<T::AccountId, usize>,
) -> impl Fn(&T::AccountId) -> Option<CompactVoterIndexOf<T>> {
move |who| {
cache.get(who).and_then(|i| <usize as TryInto<CompactVoterIndexOf<T>>>::try_into(*i).ok())
}
}
/// Same as [`voter_index_fn`], but the returning index is converted into usize, if possible.
///
/// ## Warning
......
......@@ -17,6 +17,7 @@
use super::*;
use crate as multi_phase;
use multi_phase::unsigned::{IndexAssignmentOf, Voter};
pub use frame_support::{assert_noop, assert_ok};
use frame_support::{
parameter_types,
......@@ -41,7 +42,7 @@ use sp_runtime::{
traits::{BlakeTwo256, IdentityLookup},
PerU16,
};
use std::sync::Arc;
use std::{convert::TryFrom, sync::Arc};
pub type Block = sp_runtime::generic::Block<Header, UncheckedExtrinsic>;
pub type UncheckedExtrinsic = sp_runtime::generic::UncheckedExtrinsic<AccountId, Call, (), ()>;
......@@ -95,6 +96,63 @@ pub fn roll_to_with_ocw(n: u64) {
}
}
pub struct TrimHelpers {
pub voters: Vec<Voter<Runtime>>,
pub assignments: Vec<IndexAssignmentOf<Runtime>>,
pub encoded_size_of:
Box<dyn Fn(&[IndexAssignmentOf<Runtime>]) -> Result<usize, sp_npos_elections::Error>>,
pub voter_index: Box<
dyn Fn(
&<Runtime as frame_system::Config>::AccountId,
) -> Option<CompactVoterIndexOf<Runtime>>,
>,
}
/// Helpers for setting up trimming tests.
///
/// Assignments are pre-sorted in reverse order of stake.
pub fn trim_helpers() -> TrimHelpers {
let RoundSnapshot { voters, targets } = MultiPhase::snapshot().unwrap();
let stakes: std::collections::HashMap<_, _> =
voters.iter().map(|(id, stake, _)| (*id, *stake)).collect();
// Compute the size of a compact solution comprised of the selected arguments.
//
// This function completes in `O(edges)`; it's expensive, but linear.
let encoded_size_of = Box::new(|assignments: &[IndexAssignmentOf<Runtime>]| {
CompactOf::<Runtime>::try_from(assignments).map(|compact| compact.encoded_size())
});
let cache = helpers::generate_voter_cache::<Runtime>(&voters);
let voter_index = helpers::voter_index_fn_owned::<Runtime>(cache);
let target_index = helpers::target_index_fn::<Runtime>(&targets);
let desired_targets = MultiPhase::desired_targets().unwrap();
let ElectionResult { mut assignments, .. } = seq_phragmen::<_, CompactAccuracyOf<Runtime>>(
desired_targets as usize,
targets.clone(),
voters.clone(),
None,
)
.unwrap();
// sort by decreasing order of stake
assignments.sort_unstable_by_key(|assignment| {
std::cmp::Reverse(stakes.get(&assignment.who).cloned().unwrap_or_default())
});
// convert to IndexAssignment
let assignments = assignments
.iter()
.map(|assignment| {
IndexAssignmentOf::<Runtime>::new(assignment, &voter_index, &target_index)
})
.collect::<Result<Vec<_>, _>>()
.expect("test assignments don't contain any voters with too many votes");
TrimHelpers { voters, assignments, encoded_size_of, voter_index: Box::new(voter_index) }
}
/// Spit out a verifiable raw solution.
///
/// This is a good example of what an offchain miner would do.
......@@ -102,12 +160,6 @@ pub fn raw_solution() -> RawSolution<CompactOf<Runtime>> {
let RoundSnapshot { voters, targets } = MultiPhase::snapshot().unwrap();
let desired_targets = MultiPhase::desired_targets().unwrap();
// closures
let cache = helpers::generate_voter_cache::<Runtime>(&voters);
let voter_index = helpers::voter_index_fn_linear::<Runtime>(&voters);
let target_index = helpers::target_index_fn_linear::<Runtime>(&targets);
let stake_of = helpers::stake_of_fn::<Runtime>(&voters, &cache);
let ElectionResult { winners, assignments } = seq_phragmen::<_, CompactAccuracyOf<Runtime>>(
desired_targets as usize,
targets.clone(),
......@@ -116,6 +168,12 @@ pub fn raw_solution() -> RawSolution<CompactOf<Runtime>> {
)
.unwrap();
// closures
let cache = helpers::generate_voter_cache::<Runtime>(&voters);
let voter_index = helpers::voter_index_fn_linear::<Runtime>(&voters);
let target_index = helpers::target_index_fn_linear::<Runtime>(&targets);
let stake_of = helpers::stake_of_fn::<Runtime>(&voters, &cache);
let winners = to_without_backing(winners);
let score = {
......@@ -123,7 +181,7 @@ pub fn raw_solution() -> RawSolution<CompactOf<Runtime>> {
to_supports(&winners, &staked).unwrap().evaluate()
};
let compact =
<CompactOf<Runtime>>::from_assignment(assignments, &voter_index, &target_index).unwrap();
<CompactOf<Runtime>>::from_assignment(&assignments, &voter_index, &target_index).unwrap();
let round = MultiPhase::round();
RawSolution { compact, score, round }
......
......@@ -25,7 +25,7 @@ use sp_npos_elections::{
assignment_staked_to_ratio_normalized,
};
use sp_runtime::{offchain::storage::StorageValueRef, traits::TrailingZeroInput};
use sp_std::cmp::Ordering;
use sp_std::{cmp::Ordering, convert::TryFrom};
/// Storage key used to store the persistent offchain worker status.
pub(crate) const OFFCHAIN_HEAD_DB: &[u8] = b"parity/multi-phase-unsigned-election";
......@@ -34,6 +34,23 @@ pub(crate) const OFFCHAIN_HEAD_DB: &[u8] = b"parity/multi-phase-unsigned-electio
/// within a window of 5 blocks.
pub(crate) const OFFCHAIN_REPEAT: u32 = 5;
/// A voter's fundamental data: their ID, their stake, and the list of candidates for whom they
/// voted.
pub type Voter<T> = (
<T as frame_system::Config>::AccountId,
sp_npos_elections::VoteWeight,
Vec<<T as frame_system::Config>::AccountId>,
);
/// The relative distribution of a voter's stake among the winning targets.
pub type Assignment<T> = sp_npos_elections::Assignment<
<T as frame_system::Config>::AccountId,
CompactAccuracyOf<T>,
>;
/// The [`IndexAssignment`][sp_npos_elections::IndexAssignment] type specialized for a particular runtime `T`.
pub type IndexAssignmentOf<T> = sp_npos_elections::IndexAssignmentOf<CompactOf<T>>;
#[derive(Debug, Eq, PartialEq)]
pub enum MinerError {
/// An internal error in the NPoS elections crate.
......@@ -144,7 +161,7 @@ impl<T: Config> Pallet<T> {
Self::snapshot().ok_or(MinerError::SnapshotUnAvailable)?;
let desired_targets = Self::desired_targets().ok_or(MinerError::SnapshotUnAvailable)?;
// closures.
// now make some helper closures.
let cache = helpers::generate_voter_cache::<T>(&voters);
let voter_index = helpers::voter_index_fn::<T>(&cache);
let target_index = helpers::target_index_fn::<T>(&targets);
......@@ -152,41 +169,71 @@ impl<T: Config> Pallet<T> {
let target_at = helpers::target_at_fn::<T>(&targets);
let stake_of = helpers::stake_of_fn::<T>(&voters, &cache);
// Compute the size of a compact solution comprised of the selected arguments.
//
// This function completes in `O(edges)`; it's expensive, but linear.
let encoded_size_of = |assignments: &[IndexAssignmentOf<T>]| {
CompactOf::<T>::try_from(assignments).map(|compact| compact.encoded_size())
};
let ElectionResult { assignments, winners } = election_result;
// convert to staked and reduce.
let mut staked = assignment_ratio_to_staked_normalized(assignments, &stake_of)
.map_err::<MinerError, _>(Into::into)?;
sp_npos_elections::reduce(&mut staked);
// Reduce (requires round-trip to staked form)
let sorted_assignments = {
// convert to staked and reduce.
let mut staked = assignment_ratio_to_staked_normalized(assignments, &stake_of)?;
// we reduce before sorting in order to ensure that the reduction process doesn't
// accidentally change the sort order
sp_npos_elections::reduce(&mut staked);
// Sort the assignments by reversed voter stake. This ensures that we can efficiently
// truncate the list.
staked.sort_by_key(
|sp_npos_elections::StakedAssignment::<T::AccountId> { who, .. }| {
// though staked assignments are expressed in terms of absolute stake, we'd
// still need to iterate over all votes in order to actually compute the total
// stake. it should be faster to look it up from the cache.
let stake = cache
.get(who)
.map(|idx| {
let (_, stake, _) = voters[*idx];
stake
})
.unwrap_or_default();
sp_std::cmp::Reverse(stake)
},
);
// convert back.
assignment_staked_to_ratio_normalized(staked)?
};
// convert back to ration and make compact.
let ratio = assignment_staked_to_ratio_normalized(staked)?;
let compact = <CompactOf<T>>::from_assignment(ratio, &voter_index, &target_index)?;
// convert to `IndexAssignment`. This improves the runtime complexity of repeatedly
// converting to `Compact`.
let mut index_assignments = sorted_assignments
.into_iter()
.map(|assignment| IndexAssignmentOf::<T>::new(&assignment, &voter_index, &target_index))
.collect::<Result<Vec<_>, _>>()?;
// trim assignments list for weight and length.
let size =
SolutionOrSnapshotSize { voters: voters.len() as u32, targets: targets.len() as u32 };
let maximum_allowed_voters = Self::maximum_voter_for_weight::<T::WeightInfo>(
Self::trim_assignments_weight(
desired_targets,
size,
T::MinerMaxWeight::get(),
&mut index_assignments,
);
log!(
debug,
"initial solution voters = {}, snapshot = {:?}, maximum_allowed(capped) = {}",
compact.voter_count(),
size,
maximum_allowed_voters,
);
// trim length and weight
let compact = Self::trim_compact_weight(maximum_allowed_voters, compact, &voter_index)?;
let compact = Self::trim_compact_length(
Self::trim_assignments_length(
T::MinerMaxLength::get(),
compact,
&voter_index,
&mut index_assignments,
&encoded_size_of,
)?;
// now make compact.
let compact = CompactOf::<T>::try_from(&index_assignments)?;
// re-calc score.
let winners = sp_npos_elections::to_without_backing(winners);
let score = compact.clone().score(&winners, stake_of, voter_at, target_at)?;
......@@ -212,15 +259,14 @@ impl<T: Config> Pallet<T> {
}
}
/// Greedily reduce the size of the a solution to fit into the block, w.r.t. weight.
/// Greedily reduce the size of the solution to fit into the block w.r.t. weight.
///
/// The weight of the solution is foremost a function of the number of voters (i.e.
/// `compact.len()`). Aside from this, the other components of the weight are invariant. The
/// `assignments.len()`). Aside from this, the other components of the weight are invariant. The
/// number of winners shall not be changed (otherwise the solution is invalid) and the
/// `ElectionSize` is merely a representation of the total number of stakers.
///
/// Thus, we reside to stripping away some voters. This means only changing the `compact`
/// struct.
/// Thus, we reside to stripping away some voters from the `assignments`.
///
/// Note that the solution is already computed, and the winners are elected based on the merit
/// of the entire stake in the system. Nonetheless, some of the voters will be removed further
......@@ -228,50 +274,24 @@ impl<T: Config> Pallet<T> {
///
/// Indeed, the score must be computed **after** this step. If this step reduces the score too
/// much or remove a winner, then the solution must be discarded **after** this step.
pub fn trim_compact_weight<FN>(
maximum_allowed_voters: u32,
mut compact: CompactOf<T>,
voter_index: FN,
) -> Result<CompactOf<T>, MinerError>
where
for<'r> FN: Fn(&'r T::AccountId) -> Option<CompactVoterIndexOf<T>>,
{
match compact.voter_count().checked_sub(maximum_allowed_voters as usize) {
Some(to_remove) if to_remove > 0 => {
// grab all voters and sort them by least stake.
let RoundSnapshot { voters, .. } =
Self::snapshot().ok_or(MinerError::SnapshotUnAvailable)?;
let mut voters_sorted = voters
.into_iter()
.map(|(who, stake, _)| (who.clone(), stake))
.collect::<Vec<_>>();
voters_sorted.sort_by_key(|(_, y)| *y);
// start removing from the least stake. Iterate until we know enough have been
// removed.
let mut removed = 0;
for (maybe_index, _stake) in
voters_sorted.iter().map(|(who, stake)| (voter_index(&who), stake))
{
let index = maybe_index.ok_or(MinerError::SnapshotUnAvailable)?;
if compact.remove_voter(index) {
removed += 1
}
if removed >= to_remove {
break;
}
}
log!(debug, "removed {} voter to meet the max weight limit.", to_remove);
Ok(compact)
}
_ => {
// nada, return as-is
log!(debug, "didn't remove any voter for weight limits.");
Ok(compact)
}
}
fn trim_assignments_weight(
desired_targets: u32,
size: SolutionOrSnapshotSize,
max_weight: Weight,
assignments: &mut Vec<IndexAssignmentOf<T>>,
) {
let maximum_allowed_voters = Self::maximum_voter_for_weight::<T::WeightInfo>(
desired_targets,
size,
max_weight,
);
let removing: usize = assignments.len().saturating_sub(maximum_allowed_voters.saturated_into());
log!(
debug,
"from {} assignments, truncating to {} for weight, removing {}",
assignments.len(), maximum_allowed_voters, removing,
);
assignments.truncate(maximum_allowed_voters as usize);
}
/// Greedily reduce the size of the solution to fit into the block w.r.t length.
......@@ -283,39 +303,62 @@ impl<T: Config> Pallet<T> {
/// the total stake in the system. Nevertheless, some of the voters may be removed here.
///
/// Sometimes, removing a voter can cause a validator to also be implicitly removed, if
/// that voter was the only backer of that winner. In such cases, this solution is invalid, which
/// will be caught prior to submission.
/// that voter was the only backer of that winner. In such cases, this solution is invalid,
/// which will be caught prior to submission.
///
/// The score must be computed **after** this step. If this step reduces the score too much,
/// then the solution must be discarded.
pub fn trim_compact_length(
pub(crate) fn trim_assignments_length(
max_allowed_length: u32,
mut compact: CompactOf<T>,
voter_index: impl Fn(&T::AccountId) -> Option<CompactVoterIndexOf<T>>,
) -> Result<CompactOf<T>, MinerError> {
// short-circuit to avoid getting the voters if possible
// this involves a redundant encoding, but that should hopefully be relatively cheap
if (compact.encoded_size().saturated_into::<u32>()) <= max_allowed_length {
return Ok(compact);
assignments: &mut Vec<IndexAssignmentOf<T>>,
encoded_size_of: impl Fn(&[IndexAssignmentOf<T>]) -> Result<usize, sp_npos_elections::Error>,
) -> Result<(), MinerError> {
// Perform a binary search for the max subset of which can fit into the allowed
// length. Having discovered that, we can truncate efficiently.
let max_allowed_length: usize = max_allowed_length.saturated_into();
let mut high = assignments.len();
let mut low = 0;
while high - low > 1 {
let test = (high + low) / 2;
if encoded_size_of(&assignments[..test])? <= max_allowed_length {
low = test;
} else {
high = test;
}
}
let maximum_allowed_voters =
if encoded_size_of(&assignments[..low + 1])? <= max_allowed_length {
low + 1
} else {
low
};
// ensure our postconditions are correct
debug_assert!(
encoded_size_of(&assignments[..maximum_allowed_voters]).unwrap() <= max_allowed_length
);
debug_assert!(if maximum_allowed_voters < assignments.len() {
encoded_size_of(&assignments[..maximum_allowed_voters + 1]).unwrap()
> max_allowed_length
} else {
true
});
// grab all voters and sort them by least stake.
let RoundSnapshot { voters, .. } =
Self::snapshot().ok_or(MinerError::SnapshotUnAvailable)?;
let mut voters_sorted = voters
.into_iter()
.map(|(who, stake, _)| (who.clone(), stake))
.collect::<Vec<_>>();
voters_sorted.sort_by_key(|(_, y)| *y);
voters_sorted.reverse();
while compact.encoded_size() > max_allowed_length.saturated_into() {
let (smallest_stake_voter, _) = voters_sorted.pop().ok_or(MinerError::NoMoreVoters)?;
let index = voter_index(&smallest_stake_voter).ok_or(MinerError::SnapshotUnAvailable)?;
compact.remove_voter(index);
}
// NOTE: before this point, every access was immutable.
// after this point, we never error.
// check before edit.
Ok(compact)
log!(
debug,
"from {} assignments, truncating to {} for length, removing {}",
assignments.len(),
maximum_allowed_voters,
assignments.len().saturating_sub(maximum_allowed_voters),
);
assignments.truncate(maximum_allowed_voters);
Ok(())
}
/// Find the maximum `len` that a compact can have in order to fit into the block weight.
......@@ -552,16 +595,20 @@ mod max_weight {
#[cfg(test)]
mod tests {
use super::{
mock::{Origin, *},
Call, *,
use super::*;
use crate::{
mock::{
assert_noop, assert_ok, ExtBuilder, Extrinsic, MinerMaxWeight, MultiPhase, Origin,
roll_to_with_ocw, roll_to, Runtime, TestCompact, TrimHelpers, trim_helpers, witness,
},
};
use frame_support::{dispatch::Dispatchable, traits::OffchainWorker};
use helpers::voter_index_fn_linear;
use mock::Call as OuterCall;
use frame_election_provider_support::Assignment;
use sp_npos_elections::IndexAssignment;
use sp_runtime::{traits::ValidateUnsigned, PerU16};
type Assignment = crate::unsigned::Assignment<Runtime>;
#[test]
fn validate_unsigned_retracts_wrong_phase() {
ExtBuilder::default().desired_targets(0).build_and_execute(|| {
......@@ -943,88 +990,86 @@ mod tests {
}
#[test]
fn trim_compact_length_does_not_modify_when_short_enough() {
fn trim_assignments_length_does_not_modify_when_short_enough() {
let mut ext = ExtBuilder::default().build();
ext.execute_with(|| {
roll_to(25);
// given
let RoundSnapshot { voters, ..} = MultiPhase::snapshot().unwrap();
let RawSolution { mut compact, .. } = raw_solution();
let encoded_len = compact.encode().len() as u32;
let TrimHelpers {
mut assignments,
encoded_size_of,
..
} = trim_helpers();
let compact = CompactOf::<Runtime>::try_from(assignments.as_slice()).unwrap();
let encoded_len = compact.encoded_size() as u32;
let compact_clone = compact.clone();
// when
assert!(encoded_len < <Runtime as Config>::MinerMaxLength::get());
MultiPhase::trim_assignments_length(encoded_len, &mut assignments, encoded_size_of).unwrap();
// then
compact = MultiPhase::trim_compact_length(
encoded_len,
compact,
voter_index_fn_linear::<Runtime>(&voters),
).unwrap();
let compact = CompactOf::<Runtime>::try_from(assignments.as_slice()).unwrap();
assert_eq!(compact, compact_clone);
});
}
#[test]
fn trim_compact_length_modifies_when_too_long() {
fn trim_assignments_length_modifies_when_too_long() {
let mut ext = ExtBuilder::default().build();
ext.execute_with(|| {
roll_to(25);
let RoundSnapshot { voters, ..} =
MultiPhase::snapshot().unwrap();
let RawSolution { mut compact, .. } = raw_solution();
let encoded_len = compact.encoded_size() as u32;
// given
let TrimHelpers {
mut assignments,
encoded_size_of,
..
} = trim_helpers();
let compact = CompactOf::<Runtime>::try_from(assignments.as_slice()).unwrap();
let encoded_len = compact.encoded_size();
let compact_clone = compact.clone();
compact = MultiPhase::trim_compact_length(
encoded_len - 1,
compact,
voter_index_fn_linear::<Runtime>(&voters),
).unwrap();
// when
MultiPhase::trim_assignments_length(encoded_len as u32 - 1, &mut assignments, encoded_size_of).unwrap();
// then
let compact = CompactOf::<Runtime>::try_from(assignments.as_slice()).unwrap();
assert_ne!(compact, compact_clone);
assert!((compact.encoded_size() as u32) < encoded_len);
assert!(compact.encoded_size() < encoded_len);
});
}
#[test]
fn trim_compact_length_trims_lowest_stake() {
fn trim_assignments_length_trims_lowest_stake() {
let mut ext = ExtBuilder::default().build();
ext.execute_with(|| {
roll_to(25);
let RoundSnapshot { voters, ..} =
MultiPhase::snapshot().unwrap();
let RawSolution { mut compact, .. } = raw_solution();
// given
let TrimHelpers {
voters,
mut assignments,
encoded_size_of,
voter_index,
} = trim_helpers();
let compact = CompactOf::<Runtime>::try_from(assignments.as_slice()).unwrap();
let encoded_len = compact.encoded_size() as u32;
let voter_count = compact.voter_count();
let count = assignments.len();
let min_stake_voter = voters.iter()
.map(|(id, weight, _)| (weight, id))
.min()
.map(|(_, id)| id)
.and_then(|(_, id)| voter_index(id))
.unwrap();
// when
MultiPhase::trim_assignments_length(encoded_len - 1, &mut assignments, encoded_size_of).unwrap();
compact = MultiPhase::trim_compact_length(
encoded_len - 1,
compact,
voter_index_fn_linear::<Runtime>(&voters),
).unwrap();
assert_eq!(compact.voter_count(), voter_count - 1, "we must have removed exactly 1 voter");
let assignments = compact.into_assignment(
|voter| Some(voter as AccountId),
|target| Some(target as AccountId),
).unwrap();
// then
assert_eq!(assignments.len(), count - 1, "we must have removed exactly one assignment");
assert!(
assignments.iter()
.all(|Assignment{ who, ..}| who != min_stake_voter),
.all(|IndexAssignment{ who, ..}| *who != min_stake_voter),
"min_stake_voter must no longer be in the set of voters",
);
});
......
......@@ -28,6 +28,7 @@ sp-runtime = { version = "3.0.0", path = "../runtime" }
[features]
default = ["std"]
bench = []
mocks = []
std = [
"codec/std",
"serde",
......
// This file is part of Substrate.
// Copyright (C) 2020-2021 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Code generation for getting the compact representation from the `IndexAssignment` type.
use crate::field_name_for;
use proc_macro2::TokenStream as TokenStream2;
use quote::quote;
pub(crate) fn from_impl(count: usize) -> TokenStream2 {
let from_impl_single = {
let name = field_name_for(1);
quote!(1 => compact.#name.push(
(
*who,
distribution[0].0,
)
),)
};
let from_impl_double = {
let name = field_name_for(2);
quote!(2 => compact.#name.push(
(
*who,
(
distribution[0].0,
distribution[0].1,
),
distribution[1].0,
)
),)
};
let from_impl_rest = (3..=count)
.map(|c| {
let inner = (0..c - 1)
.map(|i| quote!((distribution[#i].0, distribution[#i].1),))
.collect::<TokenStream2>();
let field_name = field_name_for(c);
let last_index = c - 1;
let last = quote!(distribution[#last_index].0);
quote!(
#c => compact.#field_name.push(
(
*who,
[#inner],
#last,
)
),
)
})
.collect::<TokenStream2>();
quote!(
#from_impl_single
#from_impl_double
#from_impl_rest
)
}
......@@ -25,6 +25,7 @@ use syn::parse::{Parse, ParseStream, Result};
mod assignment;
mod codec;
mod index_assignment;
// prefix used for struct fields in compact.
const PREFIX: &'static str = "votes";
......@@ -177,6 +178,7 @@ fn struct_def(
let from_impl = assignment::from_impl(count);
let into_impl = assignment::into_impl(count, weight_type.clone());
let from_index_impl = index_assignment::from_impl(count);
Ok(quote! (
/// A struct to encode a election assignment in a compact way.
......@@ -223,7 +225,7 @@ fn struct_def(
}
fn from_assignment<FV, FT, A>(
assignments: _npos::sp_std::prelude::Vec<_npos::Assignment<A, #weight_type>>,
assignments: &[_npos::Assignment<A, #weight_type>],
index_of_voter: FV,
index_of_target: FT,
) -> Result<Self, _npos::Error>
......@@ -256,6 +258,29 @@ fn struct_def(
Ok(assignments)
}
}
type __IndexAssignment = _npos::IndexAssignment<
<#ident as _npos::CompactSolution>::Voter,
<#ident as _npos::CompactSolution>::Target,
<#ident as _npos::CompactSolution>::Accuracy,
>;
impl<'a> _npos::sp_std::convert::TryFrom<&'a [__IndexAssignment]> for #ident {
type Error = _npos::Error;
fn try_from(index_assignments: &'a [__IndexAssignment]) -> Result<Self, Self::Error> {
let mut compact = #ident::default();
for _npos::IndexAssignment { who, distribution } in index_assignments {
match distribution.len() {
0 => {}
#from_index_impl
_ => {
return Err(_npos::Error::CompactTargetOverflow);
}
}
};
Ok(compact)
}
}
))
}
......
// This file is part of Substrate.
// Copyright (C) 2020-2021 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Structs and helpers for distributing a voter's stake among various winners.
use crate::{Error, ExtendedBalance, IdentifierT, PerThing128, __OrInvalidIndex};
use codec::{Encode, Decode};
use sp_arithmetic::{traits::{Bounded, Zero}, Normalizable, PerThing};
use sp_core::RuntimeDebug;
use sp_std::vec::Vec;
/// A voter's stake assignment among a set of targets, represented as ratios.
#[derive(RuntimeDebug, Clone, Default)]
#[cfg_attr(feature = "std", derive(PartialEq, Eq, Encode, Decode))]
pub struct Assignment<AccountId, P: PerThing> {
/// Voter's identifier.
pub who: AccountId,
/// The distribution of the voter's stake.
pub distribution: Vec<(AccountId, P)>,
}
impl<AccountId: IdentifierT, P: PerThing128> Assignment<AccountId, P> {
/// Convert from a ratio assignment into one with absolute values aka. [`StakedAssignment`].
///
/// It needs `stake` which is the total budget of the voter.
///
/// Note that this might create _un-normalized_ assignments, due to accuracy loss of `P`. Call
/// site might compensate by calling `try_normalize()` on the returned `StakedAssignment` as a
/// post-precessing.
///
/// If an edge ratio is [`Bounded::min_value()`], it is dropped. This edge can never mean
/// anything useful.
pub fn into_staked(self, stake: ExtendedBalance) -> StakedAssignment<AccountId> {
let distribution = self
.distribution
.into_iter()
.filter_map(|(target, p)| {
// if this ratio is zero, then skip it.
if p.is_zero() {
None
} else {
// NOTE: this mul impl will always round to the nearest number, so we might both
// overflow and underflow.
let distribution_stake = p * stake;
Some((target, distribution_stake))
}
})
.collect::<Vec<(AccountId, ExtendedBalance)>>();
StakedAssignment {
who: self.who,
distribution,
}
}
/// Try and normalize this assignment.
///
/// If `Ok(())` is returned, then the assignment MUST have been successfully normalized to 100%.
///
/// ### Errors
///
/// This will return only if the internal `normalize` fails. This can happen if sum of
/// `self.distribution.map(|p| p.deconstruct())` fails to fit inside `UpperOf<P>`. A user of
/// this crate may statically assert that this can never happen and safely `expect` this to
/// return `Ok`.
pub fn try_normalize(&mut self) -> Result<(), &'static str> {
self.distribution
.iter()
.map(|(_, p)| *p)
.collect::<Vec<_>>()
.normalize(P::one())
.map(|normalized_ratios|
self.distribution
.iter_mut()
.zip(normalized_ratios)
.for_each(|((_, old), corrected)| { *old = corrected; })
)
}
}
/// A voter's stake assignment among a set of targets, represented as absolute values in the scale
/// of [`ExtendedBalance`].
#[derive(RuntimeDebug, Clone, Default)]
#[cfg_attr(feature = "std", derive(PartialEq, Eq, Encode, Decode))]
pub struct StakedAssignment<AccountId> {
/// Voter's identifier
pub who: AccountId,
/// The distribution of the voter's stake.
pub distribution: Vec<(AccountId, ExtendedBalance)>,
}
impl<AccountId> StakedAssignment<AccountId> {
/// Converts self into the normal [`Assignment`] type.
///
/// NOTE: This will always round down, and thus the results might be less than a full 100% `P`.
/// Use a normalization post-processing to fix this. The data type returned here will
/// potentially get used to create a compact type; a compact type requires sum of ratios to be
/// less than 100% upon un-compacting.
///
/// If an edge stake is so small that it cannot be represented in `T`, it is ignored. This edge
/// can never be re-created and does not mean anything useful anymore.
pub fn into_assignment<P: PerThing>(self) -> Assignment<AccountId, P>
where
AccountId: IdentifierT,
{
let stake = self.total();
let distribution = self.distribution
.into_iter()
.filter_map(|(target, w)| {
let per_thing = P::from_rational(w, stake);
if per_thing == Bounded::min_value() {
None
} else {
Some((target, per_thing))
}
})
.collect::<Vec<(AccountId, P)>>();
Assignment {
who: self.who,
distribution,
}
}
/// Try and normalize this assignment.
///
/// If `Ok(())` is returned, then the assignment MUST have been successfully normalized to
/// `stake`.
///
/// NOTE: current implementation of `.normalize` is almost safe to `expect()` upon. The only
/// error case is when the input cannot fit in `T`, or the sum of input cannot fit in `T`.
/// Sadly, both of these are dependent upon the implementation of `VoteLimit`, i.e. the limit of
/// edges per voter which is enforced from upstream. Hence, at this crate, we prefer returning a
/// result and a use the name prefix `try_`.
pub fn try_normalize(&mut self, stake: ExtendedBalance) -> Result<(), &'static str> {
self.distribution
.iter()
.map(|(_, ref weight)| *weight)
.collect::<Vec<_>>()
.normalize(stake)
.map(|normalized_weights|
self.distribution
.iter_mut()
.zip(normalized_weights.into_iter())
.for_each(|((_, weight), corrected)| { *weight = corrected; })
)
}
/// Get the total stake of this assignment (aka voter budget).
pub fn total(&self) -> ExtendedBalance {
self.distribution.iter().fold(Zero::zero(), |a, b| a.saturating_add(b.1))
}
}
/// The [`IndexAssignment`] type is an intermediate between the assignments list
/// ([`&[Assignment<T>]`][Assignment]) and `CompactOf<T>`.
///
/// The voter and target identifiers have already been replaced with appropriate indices,
/// making it fast to repeatedly encode into a `CompactOf<T>`. This property turns out
/// to be important when trimming for compact length.
#[derive(RuntimeDebug, Clone, Default)]
#[cfg_attr(feature = "std", derive(PartialEq, Eq, Encode, Decode))]
pub struct IndexAssignment<VoterIndex, TargetIndex, P: PerThing> {
/// Index of the voter among the voters list.
pub who: VoterIndex,
/// The distribution of the voter's stake among winning targets.
///
/// Targets are identified by their index in the canonical list.
pub distribution: Vec<(TargetIndex, P)>,
}
impl<VoterIndex, TargetIndex, P: PerThing> IndexAssignment<VoterIndex, TargetIndex, P> {
pub fn new<AccountId: IdentifierT>(
assignment: &Assignment<AccountId, P>,
voter_index: impl Fn(&AccountId) -> Option<VoterIndex>,
target_index: impl Fn(&AccountId) -> Option<TargetIndex>,
) -> Result<Self, Error> {
Ok(Self {
who: voter_index(&assignment.who).or_invalid_index()?,
distribution: assignment
.distribution
.iter()
.map(|(target, proportion)| Some((target_index(target)?, proportion.clone())))
.collect::<Option<Vec<_>>>()
.or_invalid_index()?,
})
}
}
/// A type alias for [`IndexAssignment`] made from [`crate::CompactSolution`].
pub type IndexAssignmentOf<C> = IndexAssignment<
<C as crate::CompactSolution>::Voter,
<C as crate::CompactSolution>::Target,
<C as crate::CompactSolution>::Accuracy,
>;
......@@ -72,10 +72,9 @@ pub fn assignment_staked_to_ratio_normalized<A: IdentifierT, P: PerThing128>(
staked: Vec<StakedAssignment<A>>,
) -> Result<Vec<Assignment<A, P>>, Error> {
let mut ratio = staked.into_iter().map(|a| a.into_assignment()).collect::<Vec<_>>();
ratio
.iter_mut()
.map(|a| a.try_normalize().map_err(|err| Error::ArithmeticError(err)))
.collect::<Result<_, _>>()?;
for assignment in ratio.iter_mut() {
assignment.try_normalize().map_err(|err| Error::ArithmeticError(err))?;
}
Ok(ratio)
}
......
......@@ -99,6 +99,7 @@ mod mock;
#[cfg(test)]
mod tests;
mod assignments;
pub mod phragmen;
pub mod balancing;
pub mod phragmms;
......@@ -107,6 +108,7 @@ pub mod reduce;
pub mod helpers;
pub mod pjr;
pub use assignments::{Assignment, IndexAssignment, StakedAssignment, IndexAssignmentOf};
pub use reduce::reduce;
pub use helpers::*;
pub use phragmen::*;
......@@ -139,7 +141,10 @@ impl<T> __OrInvalidIndex<T> for Option<T> {
/// A common interface for all compact solutions.
///
/// See [`sp-npos-elections-compact`] for more info.
pub trait CompactSolution: Sized {
pub trait CompactSolution
where
Self: Sized + for<'a> sp_std::convert::TryFrom<&'a [IndexAssignmentOf<Self>], Error = Error>,
{
/// The maximum number of votes that are allowed.
const LIMIT: usize;
......@@ -164,9 +169,9 @@ pub trait CompactSolution: Sized {
/// The weight/accuracy type of each vote.
type Accuracy: PerThing128;
/// Build self from a `assignments: Vec<Assignment<A, Self::Accuracy>>`.
/// Build self from a list of assignments.
fn from_assignment<FV, FT, A>(
assignments: Vec<Assignment<A, Self::Accuracy>>,
assignments: &[Assignment<A, Self::Accuracy>],
voter_index: FV,
target_index: FT,
) -> Result<Self, Error>
......@@ -455,149 +460,6 @@ pub struct ElectionResult<AccountId, P: PerThing> {
pub assignments: Vec<Assignment<AccountId, P>>,
}
/// A voter's stake assignment among a set of targets, represented as ratios.
#[derive(RuntimeDebug, Clone, Default)]
#[cfg_attr(feature = "std", derive(PartialEq, Eq, Encode, Decode))]
pub struct Assignment<AccountId, P: PerThing> {
/// Voter's identifier.
pub who: AccountId,
/// The distribution of the voter's stake.
pub distribution: Vec<(AccountId, P)>,
}
impl<AccountId: IdentifierT, P: PerThing128> Assignment<AccountId, P> {
/// Convert from a ratio assignment into one with absolute values aka. [`StakedAssignment`].
///
/// It needs `stake` which is the total budget of the voter.
///
/// Note that this might create _un-normalized_ assignments, due to accuracy loss of `P`. Call
/// site might compensate by calling `try_normalize()` on the returned `StakedAssignment` as a
/// post-precessing.
///
/// If an edge ratio is [`Bounded::min_value()`], it is dropped. This edge can never mean
/// anything useful.
pub fn into_staked(self, stake: ExtendedBalance) -> StakedAssignment<AccountId> {
let distribution = self
.distribution
.into_iter()
.filter_map(|(target, p)| {
// if this ratio is zero, then skip it.
if p.is_zero() {
None
} else {
// NOTE: this mul impl will always round to the nearest number, so we might both
// overflow and underflow.
let distribution_stake = p * stake;
Some((target, distribution_stake))
}
})
.collect::<Vec<(AccountId, ExtendedBalance)>>();
StakedAssignment {
who: self.who,
distribution,
}
}
/// Try and normalize this assignment.
///
/// If `Ok(())` is returned, then the assignment MUST have been successfully normalized to 100%.
///
/// ### Errors
///
/// This will return only if the internal `normalize` fails. This can happen if sum of
/// `self.distribution.map(|p| p.deconstruct())` fails to fit inside `UpperOf<P>`. A user of
/// this crate may statically assert that this can never happen and safely `expect` this to
/// return `Ok`.
pub fn try_normalize(&mut self) -> Result<(), &'static str> {
self.distribution
.iter()
.map(|(_, p)| *p)
.collect::<Vec<_>>()
.normalize(P::one())
.map(|normalized_ratios|
self.distribution
.iter_mut()
.zip(normalized_ratios)
.for_each(|((_, old), corrected)| { *old = corrected; })
)
}
}
/// A voter's stake assignment among a set of targets, represented as absolute values in the scale
/// of [`ExtendedBalance`].
#[derive(RuntimeDebug, Clone, Default)]
#[cfg_attr(feature = "std", derive(PartialEq, Eq, Encode, Decode))]
pub struct StakedAssignment<AccountId> {
/// Voter's identifier
pub who: AccountId,
/// The distribution of the voter's stake.
pub distribution: Vec<(AccountId, ExtendedBalance)>,
}
impl<AccountId> StakedAssignment<AccountId> {
/// Converts self into the normal [`Assignment`] type.
///
/// NOTE: This will always round down, and thus the results might be less than a full 100% `P`.
/// Use a normalization post-processing to fix this. The data type returned here will
/// potentially get used to create a compact type; a compact type requires sum of ratios to be
/// less than 100% upon un-compacting.
///
/// If an edge stake is so small that it cannot be represented in `T`, it is ignored. This edge
/// can never be re-created and does not mean anything useful anymore.
pub fn into_assignment<P: PerThing>(self) -> Assignment<AccountId, P>
where
AccountId: IdentifierT,
{
let stake = self.total();
let distribution = self.distribution
.into_iter()
.filter_map(|(target, w)| {
let per_thing = P::from_rational(w, stake);
if per_thing == Bounded::min_value() {
None
} else {
Some((target, per_thing))
}
})
.collect::<Vec<(AccountId, P)>>();
Assignment {
who: self.who,
distribution,
}
}
/// Try and normalize this assignment.
///
/// If `Ok(())` is returned, then the assignment MUST have been successfully normalized to
/// `stake`.
///
/// NOTE: current implementation of `.normalize` is almost safe to `expect()` upon. The only
/// error case is when the input cannot fit in `T`, or the sum of input cannot fit in `T`.
/// Sadly, both of these are dependent upon the implementation of `VoteLimit`, i.e. the limit of
/// edges per voter which is enforced from upstream. Hence, at this crate, we prefer returning a
/// result and a use the name prefix `try_`.
pub fn try_normalize(&mut self, stake: ExtendedBalance) -> Result<(), &'static str> {
self.distribution
.iter()
.map(|(_, ref weight)| *weight)
.collect::<Vec<_>>()
.normalize(stake)
.map(|normalized_weights|
self.distribution
.iter_mut()
.zip(normalized_weights.into_iter())
.for_each(|((_, weight), corrected)| { *weight = corrected; })
)
}
/// Get the total stake of this assignment (aka voter budget).
pub fn total(&self) -> ExtendedBalance {
self.distribution.iter().fold(Zero::zero(), |a, b| a.saturating_add(b.1))
}
}
/// A structure to demonstrate the election result from the perspective of the candidate, i.e. how
/// much support each candidate is receiving.
///
......
......@@ -17,9 +17,15 @@
//! Mock file for npos-elections.
#![cfg(test)]
#![cfg(any(test, mocks))]
use crate::*;
use std::{
collections::{HashSet, HashMap},
convert::TryInto,
hash::Hash,
};
use rand::{self, Rng, seq::SliceRandom};
use sp_arithmetic::{
traits::{One, SaturatedConversion, Zero},
PerThing,
......@@ -27,6 +33,24 @@ use sp_arithmetic::{
use sp_runtime::assert_eq_error_rate;
use sp_std::collections::btree_map::BTreeMap;
use crate::{Assignment, ElectionResult, ExtendedBalance, PerThing128, VoteWeight, seq_phragmen};
sp_npos_elections_compact::generate_solution_type!(
#[compact]
pub struct Compact::<VoterIndex = u32, TargetIndex = u16, Accuracy = Accuracy>(16)
);
pub type AccountId = u64;
/// The candidate mask allows easy disambiguation between voters and candidates: accounts
/// for which this bit is set are candidates, and without it, are voters.
pub const CANDIDATE_MASK: AccountId = 1 << ((std::mem::size_of::<AccountId>() * 8) - 1);
pub type CandidateId = AccountId;
pub type Accuracy = sp_runtime::Perbill;
pub type MockAssignment = crate::Assignment<AccountId, Accuracy>;
pub type Voter = (AccountId, VoteWeight, Vec<AccountId>);
#[derive(Default, Debug)]
pub(crate) struct _Candidate<A> {
who: A,
......@@ -60,8 +84,6 @@ pub(crate) struct _Support<A> {
pub(crate) type _Assignment<A> = (A, f64);
pub(crate) type _SupportMap<A> = BTreeMap<A, _Support<A>>;
pub(crate) type AccountId = u64;
#[derive(Debug, Clone)]
pub(crate) struct _ElectionResult<A: Clone> {
pub winners: Vec<(A, ExtendedBalance)>,
......@@ -72,14 +94,13 @@ pub(crate) fn auto_generate_self_voters<A: Clone>(candidates: &[A]) -> Vec<(A, V
candidates.iter().map(|c| (c.clone(), vec![c.clone()])).collect()
}
pub(crate) fn elect_float<A, FS>(
pub(crate) fn elect_float<A>(
candidate_count: usize,
initial_candidates: Vec<A>,
initial_voters: Vec<(A, Vec<A>)>,
stake_of: FS,
stake_of: impl Fn(&A) -> VoteWeight,
) -> Option<_ElectionResult<A>> where
A: Default + Ord + Copy,
for<'r> FS: Fn(&'r A) -> VoteWeight,
{
let mut elected_candidates: Vec<(A, ExtendedBalance)>;
let mut assigned: Vec<(A, Vec<_Assignment<A>>)>;
......@@ -299,16 +320,15 @@ pub(crate) fn do_equalize_float<A>(
pub(crate) fn create_stake_of(stakes: &[(AccountId, VoteWeight)])
-> Box<dyn Fn(&AccountId) -> VoteWeight>
-> impl Fn(&AccountId) -> VoteWeight
{
let mut storage = BTreeMap::<AccountId, VoteWeight>::new();
stakes.iter().for_each(|s| { storage.insert(s.0, s.1); });
let stake_of = move |who: &AccountId| -> VoteWeight { storage.get(who).unwrap().to_owned() };
Box::new(stake_of)
move |who: &AccountId| -> VoteWeight { storage.get(who).unwrap().to_owned() }
}
pub fn check_assignments_sum<T: PerThing>(assignments: Vec<Assignment<AccountId, T>>) {
pub fn check_assignments_sum<T: PerThing>(assignments: &[Assignment<AccountId, T>]) {
for Assignment { distribution, .. } in assignments {
let mut sum: u128 = Zero::zero();
distribution.iter().for_each(|(_, p)| sum += p.deconstruct().saturated_into::<u128>());
......@@ -316,12 +336,16 @@ pub fn check_assignments_sum<T: PerThing>(assignments: Vec<Assignment<AccountId,
}
}
pub(crate) fn run_and_compare<Output: PerThing128>(
pub(crate) fn run_and_compare<Output: PerThing128, FS>(
candidates: Vec<AccountId>,
voters: Vec<(AccountId, Vec<AccountId>)>,
stake_of: &Box<dyn Fn(&AccountId) -> VoteWeight>,
stake_of: FS,
to_elect: usize,
) {
)
where
Output: PerThing128,
FS: Fn(&AccountId) -> VoteWeight,
{
// run fixed point code.
let ElectionResult { winners, assignments } = seq_phragmen::<_, Output>(
to_elect,
......@@ -340,10 +364,10 @@ pub(crate) fn run_and_compare<Output: PerThing128>(
assert_eq!(winners.iter().map(|(x, _)| x).collect::<Vec<_>>(), truth_value.winners.iter().map(|(x, _)| x).collect::<Vec<_>>());
for Assignment { who, distribution } in assignments.clone() {
if let Some(float_assignments) = truth_value.assignments.iter().find(|x| x.0 == who) {
for Assignment { who, distribution } in assignments.iter() {
if let Some(float_assignments) = truth_value.assignments.iter().find(|x| x.0 == *who) {
for (candidate, per_thingy) in distribution {
if let Some(float_assignment) = float_assignments.1.iter().find(|x| x.0 == candidate ) {
if let Some(float_assignment) = float_assignments.1.iter().find(|x| x.0 == *candidate ) {
assert_eq_error_rate!(
Output::from_float(float_assignment.1).deconstruct(),
per_thingy.deconstruct(),
......@@ -362,15 +386,13 @@ pub(crate) fn run_and_compare<Output: PerThing128>(
}
}
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
pub(crate) fn build_support_map_float<FS>(
pub(crate) fn build_support_map_float(
result: &mut _ElectionResult<AccountId>,
stake_of: FS,
) -> _SupportMap<AccountId>
where for<'r> FS: Fn(&'r AccountId) -> VoteWeight
{
stake_of: impl Fn(&AccountId) -> VoteWeight,
) -> _SupportMap<AccountId> {
let mut supports = <_SupportMap<AccountId>>::new();
result.winners
.iter()
......@@ -393,3 +415,124 @@ pub(crate) fn build_support_map_float<FS>(
}
supports
}
/// Generate voter and assignment lists. Makes no attempt to be realistic about winner or assignment fairness.
///
/// Maintains these invariants:
///
/// - candidate ids have `CANDIDATE_MASK` bit set
/// - voter ids do not have `CANDIDATE_MASK` bit set
/// - assignments have the same ordering as voters
/// - `assignments.distribution.iter().map(|(_, frac)| frac).sum() == One::one()`
/// - a coherent set of winners is chosen:
///   - the winner set is a subset of the candidate set.
///   - `assignments.distribution.iter().all(|(who, _)| winners.contains(who))`
pub fn generate_random_votes(
candidate_count: usize,
voter_count: usize,
mut rng: impl Rng,
) -> (Vec<Voter>, Vec<MockAssignment>, Vec<CandidateId>) {
// cache for fast generation of unique candidate and voter ids
let mut used_ids = HashSet::with_capacity(candidate_count + voter_count);
// candidates are easy: just a completely random set of IDs
let mut candidates: Vec<AccountId> = Vec::with_capacity(candidate_count);
while candidates.len() < candidate_count {
let mut new = || rng.gen::<AccountId>() | CANDIDATE_MASK;
let mut id = new();
// insert returns `false` when the value was already present
while !used_ids.insert(id) {
id = new();
}
candidates.push(id);
}
// voters are random ids, random weights, random selection from the candidates
let mut voters = Vec::with_capacity(voter_count);
while voters.len() < voter_count {
let mut new = || rng.gen::<AccountId>() & !CANDIDATE_MASK;
let mut id = new();
// insert returns `false` when the value was already present
while !used_ids.insert(id) {
id = new();
}
let vote_weight = rng.gen();
// it's not interesting if a voter chooses 0 or all candidates, so rule those cases out.
// also, let's not generate any cases which result in a compact overflow.
let n_candidates_chosen = rng.gen_range(1, candidates.len().min(16));
let mut chosen_candidates = Vec::with_capacity(n_candidates_chosen);
chosen_candidates.extend(candidates.choose_multiple(&mut rng, n_candidates_chosen));
voters.push((id, vote_weight, chosen_candidates));
}
// always generate a sensible number of winners: elections are uninteresting if nobody wins,
// or everybody wins
let num_winners = rng.gen_range(1, candidate_count);
let mut winners: HashSet<AccountId> = HashSet::with_capacity(num_winners);
winners.extend(candidates.choose_multiple(&mut rng, num_winners));
assert_eq!(winners.len(), num_winners);
let mut assignments = Vec::with_capacity(voters.len());
for (voter_id, _, votes) in voters.iter() {
let chosen_winners = votes.iter().filter(|vote| winners.contains(vote)).cloned();
let num_chosen_winners = chosen_winners.clone().count();
// distribute the available stake randomly
let stake_distribution = if num_chosen_winners == 0 {
Vec::new()
} else {
let mut available_stake = 1000;
let mut stake_distribution = Vec::with_capacity(num_chosen_winners);
for _ in 0..num_chosen_winners - 1 {
let stake = rng.gen_range(0, available_stake);
stake_distribution.push(Accuracy::from_perthousand(stake));
available_stake -= stake;
}
stake_distribution.push(Accuracy::from_perthousand(available_stake));
stake_distribution.shuffle(&mut rng);
stake_distribution
};
assignments.push(MockAssignment {
who: *voter_id,
distribution: chosen_winners.zip(stake_distribution).collect(),
});
}
(voters, assignments, candidates)
}
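The invariants listed in the doc comment above can be spot-checked directly on the generated data. A minimal sketch, not part of the diff: the test name, seed, and sizes are hypothetical, and it assumes `rand::SeedableRng` is in scope alongside the `CANDIDATE_MASK`, `Voter`, and `MockAssignment` items from this file:

#[test]
fn generated_votes_uphold_documented_invariants() {
	let rng = rand::rngs::SmallRng::seed_from_u64(42);
	let (voters, assignments, candidates) = generate_random_votes(10, 100, rng);
	// candidate ids carry the mask bits; voter ids do not
	assert!(candidates.iter().all(|c| c & CANDIDATE_MASK == CANDIDATE_MASK));
	assert!(voters.iter().all(|(id, _, _)| id & CANDIDATE_MASK == 0));
	// assignments keep the same ordering as voters
	assert!(voters.iter().zip(&assignments).all(|((v, _, _), a)| *v == a.who));
	// distributions only point at known candidates (winners are a subset of them);
	// the per-voter sums can be verified with `check_assignments_sum`-style logic
	assert!(assignments
		.iter()
		.all(|a| a.distribution.iter().all(|(who, _)| candidates.contains(who))));
}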
fn generate_cache<Voters, Item>(voters: Voters) -> HashMap<Item, usize>
where
Voters: Iterator<Item = Item>,
Item: Hash + Eq + Copy,
{
let mut cache = HashMap::new();
for (idx, voter_id) in voters.enumerate() {
cache.insert(voter_id, idx);
}
cache
}
/// Create a function that returns the index of a voter in the voters list.
pub fn make_voter_fn<VoterIndex>(voters: &[Voter]) -> impl Fn(&AccountId) -> Option<VoterIndex>
where
usize: TryInto<VoterIndex>,
{
let cache = generate_cache(voters.iter().map(|(id, _, _)| *id));
move |who| cache.get(who).cloned().and_then(|i| i.try_into().ok())
}
/// Create a function that returns the index of a candidate in the candidates list.
pub fn make_target_fn<TargetIndex>(
candidates: &[CandidateId],
) -> impl Fn(&CandidateId) -> Option<TargetIndex>
where
usize: TryInto<TargetIndex>,
{
let cache = generate_cache(candidates.iter().cloned());
move |who| cache.get(who).cloned().and_then(|i| i.try_into().ok())
}
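Both helpers are thin wrappers over the `HashMap` built by `generate_cache`, so lookups are O(1) and return `None` for ids that never appeared in the source list. A minimal usage sketch, not part of the diff, assuming a `u32` index type and the `voters`/`candidates` lists produced by `generate_random_votes`:

let voter_index = make_voter_fn::<u32>(&voters);
let target_index = make_target_fn::<u32>(&candidates);
// known ids map back to their position in the original list
assert_eq!(voter_index(&voters[0].0), Some(0));
assert_eq!(target_index(&candidates[0]), Some(0));
// a candidate id is never a voter id (the CANDIDATE_MASK bit differs), so this lookup misses
assert_eq!(voter_index(&candidates[0]), None);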
@@ -19,11 +19,13 @@
use crate::{
balancing, helpers::*, is_score_better, mock::*, seq_phragmen, seq_phragmen_core, setup_inputs,
to_support_map, to_supports, Assignment, ElectionResult, ExtendedBalance, StakedAssignment,
Support, Voter, EvaluateSupport,
to_support_map, to_supports, Assignment, CompactSolution, ElectionResult, ExtendedBalance,
IndexAssignment, StakedAssignment, Support, Voter, EvaluateSupport,
};
use rand::{self, SeedableRng};
use sp_arithmetic::{PerU16, Perbill, Percent, Permill};
use substrate_test_utils::assert_eq_uvec;
use std::convert::TryInto;
#[test]
fn float_phragmen_poc_works() {
@@ -423,10 +425,10 @@ fn phragmen_poc_2_works() {
(4, 500),
]);
run_and_compare::<Perbill>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Permill>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Percent>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<PerU16>(candidates, voters, &stake_of, 2);
run_and_compare::<Perbill, _>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Permill, _>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Percent, _>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<PerU16, _>(candidates, voters, &stake_of, 2);
}
#[test]
@@ -444,10 +446,10 @@ fn phragmen_poc_3_works() {
(4, 1000),
]);
run_and_compare::<Perbill>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Permill>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Percent>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<PerU16>(candidates, voters, &stake_of, 2);
run_and_compare::<Perbill, _>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Permill, _>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<Percent, _>(candidates.clone(), voters.clone(), &stake_of, 2);
run_and_compare::<PerU16, _>(candidates, voters, &stake_of, 2);
}
#[test]
@@ -475,7 +477,7 @@ fn phragmen_accuracy_on_large_scale_only_candidates() {
assert_eq_uvec!(winners, vec![(1, 18446744073709551614u128), (5, 18446744073709551613u128)]);
assert_eq!(assignments.len(), 2);
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
#[test]
@@ -527,7 +529,7 @@ fn phragmen_accuracy_on_large_scale_voters_and_candidates() {
]
);
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
#[test]
@@ -549,7 +551,7 @@ fn phragmen_accuracy_on_small_scale_self_vote() {
).unwrap();
assert_eq_uvec!(winners, vec![(20, 2), (10, 1), (30, 1)]);
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
#[test]
@@ -580,7 +582,7 @@ fn phragmen_accuracy_on_small_scale_no_self_vote() {
).unwrap();
assert_eq_uvec!(winners, vec![(20, 2), (10, 1), (30, 1)]);
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
@@ -615,7 +617,7 @@ fn phragmen_large_scale_test() {
).unwrap();
assert_eq_uvec!(to_without_backing(winners.clone()), vec![24, 22]);
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
#[test]
@@ -663,7 +665,7 @@ fn phragmen_large_scale_test_2() {
],
);
check_assignments_sum(assignments);
check_assignments_sum(&assignments);
}
#[test]
@@ -696,7 +698,7 @@ fn phragmen_linear_equalize() {
(130, 1000),
]);
run_and_compare::<Perbill>(candidates, voters, &stake_of, 2);
run_and_compare::<Perbill, _>(candidates, voters, &stake_of, 2);
}
#[test]
@@ -1355,7 +1357,7 @@ mod solution_type {
};
let compacted = TestSolutionCompact::from_assignment(
assignments.clone(),
&assignments,
voter_index,
target_index,
).unwrap();
@@ -1518,7 +1520,7 @@ mod solution_type {
];
let compacted = TestSolutionCompact::from_assignment(
assignments.clone(),
&assignments,
voter_index,
target_index,
);
@@ -1549,7 +1551,7 @@ mod solution_type {
};
let compacted = TestSolutionCompact::from_assignment(
assignments.clone(),
&assignments,
voter_index,
target_index,
).unwrap();
@@ -1564,3 +1566,24 @@ mod solution_type {
);
}
}
#[test]
fn index_assignments_generate_same_compact_as_plain_assignments() {
let rng = rand::rngs::SmallRng::seed_from_u64(0);
let (voters, assignments, candidates) = generate_random_votes(1000, 2500, rng);
let voter_index = make_voter_fn(&voters);
let target_index = make_target_fn(&candidates);
let compact = Compact::from_assignment(&assignments, &voter_index, &target_index).unwrap();
let index_assignments = assignments
.into_iter()
.map(|assignment| IndexAssignment::new(&assignment, &voter_index, &target_index))
.collect::<Result<Vec<_>, _>>()
.unwrap();
let index_compact = index_assignments.as_slice().try_into().unwrap();
assert_eq!(compact, index_compact);
}