Fix algorithmic complexity of on-demand scheduler with regards to number of cores. (#3190) · b74353d3
eskimor authored
    
    
We witnessed really poor performance on Rococo, where we ended up with
50 on-demand cores. The cause was that the full queue was processed once
per core. With this change, full queue processing happens far less often
(most operations are O(1) or O(log n)), and when it does happen, it only
affects a single core in expectation.
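
The shape of the fix can be illustrated with a small, self-contained sketch (not the pallet's actual code): instead of every core scanning one shared queue, orders are bucketed by core affinity, so serving a core only touches that core's bucket plus a shared heap of unaffiliated orders. `OnDemandQueue`, `EnqueuedOrder`, and the heap layout below are illustrative assumptions, not the runtime's real types.

```rust
use std::cmp::Reverse;
use std::collections::{BTreeMap, BinaryHeap};

type ParaId = u32;
type CoreIndex = u32;

/// Orders are kept FIFO by insertion index (min-heap via `Reverse`).
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct EnqueuedOrder {
    idx: u64,
    para_id: ParaId,
}

/// Orders without core affinity sit in one shared heap; orders affine to a
/// core sit in that core's own bucket. Serving a core therefore never walks
/// the entries belonging to other cores.
#[derive(Default)]
struct OnDemandQueue {
    next_idx: u64,
    free_entries: BinaryHeap<Reverse<EnqueuedOrder>>,
    affinity_entries: BTreeMap<CoreIndex, BinaryHeap<Reverse<EnqueuedOrder>>>,
}

impl OnDemandQueue {
    /// O(log n): place a new order into the shared heap.
    fn push_order(&mut self, para_id: ParaId) {
        let order = EnqueuedOrder { idx: self.next_idx, para_id };
        self.next_idx += 1;
        self.free_entries.push(Reverse(order));
    }

    /// O(log n): pop the oldest order visible to this core, checking the
    /// core's own bucket first and the shared heap second.
    fn pop_assignment_for_core(&mut self, core: CoreIndex) -> Option<ParaId> {
        if let Some(bucket) = self.affinity_entries.get_mut(&core) {
            if let Some(Reverse(order)) = bucket.pop() {
                return Some(order.para_id);
            }
        }
        self.free_entries.pop().map(|Reverse(order)| order.para_id)
    }
}
```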
    
The spot price is now also updated before each order to ensure economic
back pressure.
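
For the spot price change, the essential detail is the ordering: the price is refreshed before the order is accepted, so demand feeds straight back into the cost of the very next order. A minimal sketch, with a made-up adjustment curve standing in for the pallet's real pricing formula:

```rust
type Balance = u128;

/// Illustrative only: the point is the ordering, i.e. the spot price is
/// refreshed from current queue load *before* this order is priced and
/// enqueued. The adjustment curve is a stand-in, not the real formula.
fn place_order(
    spot_price: &mut Balance,
    queue: &mut Vec<u32>,
    target_queue_len: usize,
    max_amount: Balance,
    para_id: u32,
) -> Result<Balance, &'static str> {
    // 1. Update the spot price first, so a growing queue is reflected in
    //    what this very order pays (economic back pressure).
    if queue.len() > target_queue_len {
        *spot_price = spot_price.saturating_add(*spot_price / 10);
    } else {
        *spot_price = (*spot_price).saturating_sub(*spot_price / 10).max(1);
    }
    // 2. Enforce the caller's limit against the *updated* price.
    if *spot_price > max_amount {
        return Err("spot price above max_amount");
    }
    // 3. Charge `spot_price` (elided here) and enqueue the order.
    queue.push(para_id);
    Ok(*spot_price)
}
```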
    
    
TODO:

- [x] Implement
- [x] Basic tests
- [x] Add more tests (see todos)
- [x] Run benchmarks to confirm the better performance; first results suggest >100x faster.
- [x] Write migrations
- [x] Bump scale-info version and remove patch in Cargo.toml
- [x] Write PR docs: on-demand performance is improved, so a larger number of on-demand cores is no longer problematic. If needed, the max queue size can be increased again (though maybe not to 10k).
    
Optional: performance can be improved even further by calling
`pop_assignment_for_core()` before `report_processed()` (avoiding
needless affinity drops). The effect shrinks as the claim queue grows,
so I would only go for it if it does not add complexity to the
scheduler.
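
The call ordering that this optional follow-up describes could look roughly like the sketch below; the trait and its signatures are assumptions made to keep the example self-contained, not the scheduler's actual interface.

```rust
/// Hypothetical provider interface; the two method names mirror the calls
/// mentioned above, but the trait itself is an assumption for this sketch.
trait AssignmentProvider {
    fn pop_assignment_for_core(&mut self, core: u32) -> Option<u32>;
    fn report_processed(&mut self, core: u32, para_id: u32);
}

fn advance_core<P: AssignmentProvider>(provider: &mut P, core: u32, processed_para: u32) {
    // Pop the next assignment *before* reporting the processed claim, so the
    // para's affinity to this core is still alive and is not dropped only to
    // be re-established a moment later.
    let next = provider.pop_assignment_for_core(core);

    // Now report the claim that was just processed.
    provider.report_processed(core, processed_para);

    // Hand `next` to the claim queue (elided in this sketch).
    let _ = next;
}
```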
    
---------
    
Co-authored-by: eskimor <[email protected]>
Co-authored-by: antonva <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>
Co-authored-by: ordian <[email protected]>