Unverified Commit b74353d3 authored by eskimor, committed by GitHub

Fix algorithmic complexity of the on-demand scheduler with respect to the number of cores. (#3190)



We observed very poor performance on Rococo, where we ended up with
50 on-demand cores. The cause was that the full order queue was processed
once per core. With this change, full queue processing happens far less
often (most operations are O(1) or O(log(n))), and when it does happen it
affects only a single core (in expectation).
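
To illustrate the idea only (this is not the pallet's actual data structures, and all names such as `OnDemandQueue` and `push_with_affinity` are made up for the sketch): orders that already have affinity to a core sit in a small per-core FIFO, while everything else sits in one shared heap ordered by insertion index. Popping an assignment for a core then costs O(1) or O(log(n)) instead of a scan over the whole order queue per core.

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap, VecDeque};

#[derive(Clone, Debug)]
struct Order {
    para_id: u32,
}

#[derive(Default)]
struct OnDemandQueue {
    next_idx: u64,
    shared: BinaryHeap<Reverse<u64>>,         // orders without affinity, oldest first
    shared_orders: HashMap<u64, Order>,       // insertion index -> order payload
    per_core: HashMap<u32, VecDeque<Order>>,  // core index -> orders affine to that core
}

impl OnDemandQueue {
    fn place_order(&mut self, order: Order) {
        let idx = self.next_idx;
        self.next_idx += 1;
        self.shared.push(Reverse(idx));
        self.shared_orders.insert(idx, order);
    }

    fn push_with_affinity(&mut self, core: u32, order: Order) {
        self.per_core.entry(core).or_default().push_back(order);
    }

    fn pop_assignment_for_core(&mut self, core: u32) -> Option<Order> {
        // O(1): serve the core's own affinity queue first.
        if let Some(order) = self.per_core.get_mut(&core).and_then(|q| q.pop_front()) {
            return Some(order);
        }
        // O(log n): otherwise take the oldest order from the shared heap.
        let Reverse(idx) = self.shared.pop()?;
        self.shared_orders.remove(&idx)
    }
}

fn main() {
    let mut q = OnDemandQueue::default();
    q.place_order(Order { para_id: 2000 });
    q.push_with_affinity(7, Order { para_id: 2001 });

    assert_eq!(q.pop_assignment_for_core(7).unwrap().para_id, 2001); // O(1), per-core queue
    assert_eq!(q.pop_assignment_for_core(0).unwrap().para_id, 2000); // O(log n), shared heap
    assert!(q.pop_assignment_for_core(0).is_none());
}
```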

The spot price is now also updated before each order is placed, to ensure
economic back pressure.
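
A hedged sketch of that back-pressure effect, with a made-up linear fee curve and parameter names (the real pallet uses its own spot-traffic formula): because the price is refreshed before every order is accepted, a burst of orders immediately drives the price up instead of the whole burst clearing at a stale price.

```rust
struct SpotMarket {
    base_price: u128,
    target_queue_len: usize,
    queue_len: usize,
    spot_price: u128,
}

impl SpotMarket {
    fn update_spot_price(&mut self) {
        // Assumed curve for the sketch: price grows linearly once the queue
        // exceeds its target length.
        let overload = self.queue_len.saturating_sub(self.target_queue_len) as u128;
        self.spot_price = self.base_price + self.base_price * overload;
    }

    fn place_order(&mut self, max_amount: u128) -> Result<u128, &'static str> {
        // Price is refreshed *before* the order is accepted.
        self.update_spot_price();
        if max_amount < self.spot_price {
            return Err("max_amount below current spot price");
        }
        self.queue_len += 1;
        Ok(self.spot_price)
    }
}

fn main() {
    let mut market = SpotMarket { base_price: 10, target_queue_len: 2, queue_len: 0, spot_price: 10 };
    assert_eq!(market.place_order(100).unwrap(), 10);
    assert_eq!(market.place_order(100).unwrap(), 10);
    assert_eq!(market.place_order(100).unwrap(), 10); // queue at target, still base price
    // Queue now above target: the price rises before the next order, so a
    // cheap order bounces instead of piling onto an already long queue.
    assert!(market.place_order(10).is_err());
}
```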


TODO:

- [x] Implement
- [x] Basic tests
- [x] Add more tests (see todos)
- [x] Run benchmark to confirm better performance; first results suggest >100x faster.
- [x] Write migrations
- [x] Bump scale-info version and remove patch in Cargo.toml
- [x] Write PR docs: on-demand performance improved; having more on-demand
cores is no longer problematic. If need be, the max queue size can also be
increased again (maybe not to 10k).

Optional: Performance can be improved even more if we called
`pop_assignment_for_core()` before calling `report_processed()` (avoiding
needless affinity drops). The effect gets smaller the larger the claim
queue is, and I would only go for it if it does not add complexity to the
scheduler.
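
As a sketch of what that call ordering would look like (hypothetical trait shape and semantics; the real `AssignmentProvider` API in the scheduler differs):

```rust
// Hypothetical minimal trait, only to illustrate the ordering suggestion.
trait AssignmentProvider {
    type Assignment;
    // Assumed to establish/bump affinity for the popped para on this core.
    fn pop_assignment_for_core(&mut self, core: u32) -> Option<Self::Assignment>;
    // Assumed to decrement that affinity again once processing is reported.
    fn report_processed(&mut self, assignment: Self::Assignment);
}

/// Pop the next assignment *before* reporting the previous one as processed.
/// If both belong to the same para, its affinity never drops to zero in
/// between, so it is not needlessly dropped and rebuilt.
fn advance_core<P: AssignmentProvider>(
    provider: &mut P,
    core: u32,
    processed: P::Assignment,
) -> Option<P::Assignment> {
    let next = provider.pop_assignment_for_core(core);
    provider.report_processed(processed);
    next
}

fn main() {} // the sketch is only about the call ordering
```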

---------

Co-authored-by: eskimor <[email protected]>
Co-authored-by: antonva <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>
Co-authored-by: ordian <[email protected]>
parent b686bfef