  1. May 03, 2021
    • Refactor election solution trimming for efficiency (#8614) · c786fb21
      Peter Goodspeed-Niklaus authored
      * Refactor election solution trimming for efficiency
      
      The previous version always trimmed the `CompactOf<T>` instance,
      which was intrinsically inefficient: that's a packed data structure,
      which is naturally expensive to edit. It's much easier to edit
      the unpacked data structures: the `voters` and `assignments` lists.
      
      * rework length-trim tests to work with the new interface
      
      Test suite now compiles. Tests still don't pass because the macro
      generating the compact structure still generates `unimplemented!()`
      for the actual `compact_length_of` implementation.
      
      * simplify
      
      * add a fuzzer which can validate `Compact::encoded_size_for`
      
      The `Compact` solution type is generated distinctly for each runtime;
      it has three type parameters, a built-in limit on the number of
      candidates that each voter can vote for, and an optional `#[compact]`
      attribute which changes the encoding behavior.
      
      The assignment truncation algorithm we're using depends on the ability
      to efficiently and accurately determine how much space a `Compact`
      solution will take once encoded.
      
      Together, these two facts imply that simple unit tests are not
      sufficient to validate the behavior of `Compact::encoded_size_for`.
      This commit adds such a fuzzer. It is designed such that it is possible
      to add a new fuzzer to the family by simply adjusting the
      `generate_solution_type` macro invocation as desired, and making a
      few minor documentation edits.
      
      Of course, the fuzzer still fails for now: the generated implementation
      for `encoded_size_for` is still `unimplemented!()`. However, once
      the macro is updated appropriately, this fuzzer family should allow
      us to gain confidence in the correctness of the generated code.
      
      * Revert "add a fuzzer which can validate `Compact::encoded_size_for`"
      
      This reverts commit 916038790887e64217c6a46e9a6d281386762bfb.
      
      The design of `Compact::encoded_size_for` is flawed. When `#[compact]`
      mode is enabled, every integer in the dataset is encoded using a
      variable-length compact encoding. This means that it is impossible to
      compute the final length faster than actually encoding the data
      structure, because the encoded length of every field varies with the
      actual value stored.
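
      For illustration only (not part of the commit): a minimal sketch using
      the `parity-scale-codec` crate, showing that the encoded size of a
      compact integer depends on its value, so the total size can only be
      known by actually encoding the data.

      ```rust
      // The same field type occupies a different number of bytes depending on
      // the value stored in it.
      use parity_scale_codec::{Compact, Encode};

      fn main() {
          assert_eq!(Compact(1u32).encode().len(), 1);         // 0..=63 fit in one byte
          assert_eq!(Compact(1_000u32).encode().len(), 2);     // up to 2^14 - 1: two bytes
          assert_eq!(Compact(1_000_000u32).encode().len(), 4); // up to 2^30 - 1: four bytes
      }
      ```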
      
      Given that we won't be adding that method to the trait, we won't be
      needing a fuzzer to validate its correctness.
      
      * revert changes to `trait CompactSolution`
      
      If `CompactSolution::encoded_size_for` can't be implemented in the
      way that we wanted, there's no point in adding it.
      
      * WIP: restructure trim_assignments_length by actually encoding
      
      This is not as efficient as what we'd hoped for, but it should still
      be better than what it's replacing. Overall efficiency of
      `fn trim_assignments_length` is now `O(edges * lg assignments.len())`.
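
      As a rough sketch (illustrative names, not the pallet's actual code),
      the approach binary-searches the number of assignments that still fits
      under the length limit, re-measuring the encoded size at every probe:

      ```rust
      // Each probe costs O(edges) to encode, and there are O(log n) probes.
      fn trim_to_encoded_length<A>(
          assignments: &mut Vec<A>,
          max_encoded_len: usize,
          encoded_len_of: impl Fn(&[A]) -> usize,
      ) {
          let (mut low, mut high) = (0, assignments.len());
          while low < high {
              let mid = (low + high + 1) / 2; // midpoint computed inline each iteration
              if encoded_len_of(&assignments[..mid]) <= max_encoded_len {
                  low = mid; // keeping `mid` assignments still fits
              } else {
                  high = mid - 1; // too large; keep fewer
              }
          }
          assignments.truncate(low);
      }
      ```

      In the actual change the assignments are sorted first (by stake, using
      the existing cache), so that the least significant ones are the ones
      trimmed away.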
      
      * fix compiler errors
      
      * don't sort voters, just assignments
      
      Sorting the `voters` list causes lots of problems; an invariant that
      we need to maintain is that an index into the voters list has a stable
      meaning.
      
      Luckily, it turns out that there is no need for the assignments list
      to correspond to the voters list; I had previously thought that was an
      invariant, but it isn't.
      
      This simplifies things; we can just leave the voters list alone,
      and sort the assignments list the way that is convenient.
      
      * WIP: add `IndexAssignment` type to speed up repeatedly creating `Compact`
      
      Next up: `impl<'a, T> From<&'a [IndexAssignmentOf<T>]> for Compact`,
      in the proc-macro which makes `Compact`. Should be a pretty straightforward
      adaptation of `from_assignment`.
      
      * Add IndexAssignment and conversion method to CompactSolution
      
      This involves a bit of duplication of types from
      `election-provider-multi-phase`; we'll clean those up shortly.
      
      I'm not entirely happy that we had to add a `from_index_assignments`
      method to `CompactSolution`, but we couldn't define
      `trait CompactSolution: TryFrom<&'a [Self::IndexAssignment]>` because
      that made trait lookup recursive, and I didn't want to propagate
      `CompactSolutionOf<T> + TryFrom<&[IndexAssignmentOf<T>]>` everywhere
      that compact solutions are specified.
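
      For context, a sketch of the idea (field names are assumed here, not
      copied from the crate): an `IndexAssignment` stores snapshot indices
      instead of account ids, so a packed `Compact` solution can be rebuilt
      repeatedly (e.g. inside the length-trimming binary search) without
      redoing the id-to-index lookups each time.

      ```rust
      // Sketch only: an index-based assignment, with account ids already
      // resolved to positions in the voter and target snapshots.
      pub struct IndexAssignment<VoterIndex, TargetIndex, Accuracy> {
          /// index of the voter in the snapshot, rather than its account id
          pub who: VoterIndex,
          /// (target index, weight) pairs describing how the voter's stake is split
          pub distribution: Vec<(TargetIndex, Accuracy)>,
      }
      ```

      The generated `Compact` type then implements `TryFrom<&[IndexAssignmentOf<T>]>`,
      which is the conversion the later commits settle on.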
      
      * use `CompactSolution::from_index_assignment` and clean up dead code
      
      * get rid of `from_index_assignments` in favor of `TryFrom`
      
      * cause `pallet-election-provider-multi-phase` tests to compile successfully
      
      Mostly that's just updating the various test functions to keep track of
      refactorings elsewhere, though in a few places we needed to refactor some
      test-only helpers as well.
      
      * fix infinite binary search loop
      
      Turns out that moving `low` and `high` into an averager function is a
      bad idea, because the averager gets copies of those values, which
      of course are never updated. Can't use mutable references, because
      we want to read them elsewhere in the code. Just compute the average
      directly; life is better that way.
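
      A minimal illustration of the pitfall (hypothetical code, not the
      pallet's): because the bounds are `Copy`, a `move` closure snapshots
      them when it is created, and the midpoint it returns never changes as
      the search narrows.

      ```rust
      fn main() {
          let (mut low, mut high) = (0u32, 100u32);
          let average = move || (low + high) / 2; // captures *copies* of the current values

          low = 60;
          high = 80;
          assert_eq!(average(), 50); // stale midpoint: still (0 + 100) / 2

          // the fix: compute the midpoint inline, from the live values, each iteration
          assert_eq!((low + high) / 2, 70);
      }
      ```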
      
      * fix a test failure
      
      * fix the rest of test failures
      
      * remove unguarded subtraction
      
      * fix npos-elections tests compilation
      
      * ensure we use sp_std::vec::Vec in assignments
      
      * add IndexAssignmentOf to sp_npos_elections
      
      * move miner types to `unsigned`
      
      * use stable sort
      
      * rewrap some long comments
      
      * use existing cache instead of building a dedicated stake map
      
      * generalize the TryFrom bound on CompactSolution
      
      * undo adding sp-core dependency
      
      * consume assignments to produce index_assignments
      
      * Add a test of Assignment -> IndexAssignment -> Compact
      
      * fix `IndexAssignmentOf` doc
      
      * move compact test from sp-npos-elections-compact to sp-npos-elections
      
      This means that we can put the mocking parts of that into a proper
      mock package, and put the test into a test package among other tests.

      Having the mocking parts in a mock package enables a benchmark (which
      is treated as a separate crate) to import them.
      
      * rename assignments -> sorted_assignments
      
      * sort after reducing to avoid potential re-sort issues
      
      * add runtime benchmark, fix critical binary search error
      
      "Why don't you add a benchmark?", he said. "It'll be good practice,
      and can help demonstrate that this isn't blowing up the runtime."
      
      He was absolutely right.
      
      The biggest discovery is that adding a parametric benchmark means that
      you get a bunch of new test cases, for free. This is excellent, because
      those test cases uncovered a binary search bug. Fixing that simplified
      that part of the code nicely.
      
      The other nice thing you get from a parametric benchmark is data about
      what each parameter does. In this case, `f` is the size factor: what
      percent of the votes (by size) should be removed. 0 means that we should
      keep everything, 95 means that we should trim down to 5% of original size
      or less.
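
      For concreteness, a hypothetical helper (not taken from the benchmark
      code) showing how the size factor translates into a target encoded
      length:

      ```rust
      // Map the size factor `f` (0..=95 in the benchmark) to a target encoded
      // length for the trimmed solution.
      fn target_encoded_len(full_len: usize, f: usize) -> usize {
          // f = 0 keeps the full size; f = 95 allows at most 5% of the original
          full_len * (100 - f) / 100
      }

      fn main() {
          assert_eq!(target_encoded_len(10_000, 0), 10_000);
          assert_eq!(target_encoded_len(10_000, 95), 500);
      }
      ```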
      
      ```
      Median Slopes Analysis
      ========
      -- Extrinsic Time --
      
      Model:
      Time ~=     3846
          + v    0.015
          + t        0
          + a    0.192
          + d        0
          + f        0
                    µs
      
      Min Squares Analysis
      ========
      -- Extrinsic Time --
      
      Data points distribution:
          v     t     a     d     f   mean µs  sigma µs       %
      <snip>
       6000  1600  3000   800     0      4385     75.87    1.7%
       6000  1600  3000   800     9      4089     46.28    1.1%
       6000  1600  3000   800    18      3793     36.45    0.9%
       6000  1600  3000   800    27      3365     41.13    1.2%
       6000  1600  3000   800    36      3096     7.498    0.2%
       6000  1600  3000   800    45      2774     17.96    0.6%
       6000  1600  3000   800    54      2057     37.94    1.8%
       6000  1600  3000   800    63      1885     2.515    0.1%
       6000  1600  3000   800    72      1591     3.203    0.2%
       6000  1600  3000   800    81      1219     25.72    2.1%
       6000  1600  3000   800    90       859     5.295    0.6%
       6000  1600  3000   800    95     684.6     2.969    0.4%
      
      Quality and confidence:
      param     error
      v         0.008
      t         0.029
      a         0.008
      d         0.044
      f         0.185
      
      Model:
      Time ~=     3957
          + v    0.009
          + t        0
          + a    0.185
          + d        0
          + f        0
                    µs
      ```
      
      What's nice about this is the clear negative correlation between
      amount removed and total time. The more we remove, the less total
      time things take.
    • Upgrade authorship pallet to Frame-v2 (#8663) · e0f85464
      ferrell-code authored
      
      
      * first commit
      
      * get to compile
      
      * fix deprecated grandpa
      
      * formatting
      
      * module to pallet
      
      * add authorship pallet to mocks
      
      * Fix upgrade of storage.
      
      Co-authored-by: Xiliang Chen <[email protected]>
      
      * trigger CI
      
      * put back doc
      
      Co-authored-by: Guillaume Thiolliere <[email protected]>
      Co-authored-by: Xiliang Chen <[email protected]>
  2. Apr 26, 2021
    • Add BoundedVec to Treasury Pallet (#8665) · 4225d508
      Shawn Tabrizi authored
      
      
      * bounded treasury approvals
      
      * update benchmarks
      
      * update configs
      
      * cargo run --release --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark --chain=dev --steps=50 --repeat=20 --pallet=pallet_treasury --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/treasury/src/weights.rs --template=./.maintain/frame-weight-template.hbs
      
      * fix weight param
      
      Co-authored-by: Parity Benchmarking Bot <[email protected]>