Unverified commit 8a6af441 authored by Denis_P, committed by GitHub

WIP: CI: add spellcheck (#3421)



* CI: add spellcheck

* revert me

* CI: explicit command for spellchecker

* spellcheck: edit misspells

* CI: run spellcheck on diff

* spellcheck: edits

* spellcheck: edit misspells

* spellcheck: add rules

* spellcheck: mv configs

* spellcheck: more edits

* spellcheck: chore

* spellcheck: one more thing

* spellcheck: and another one

* spellcheck: seems like it doesn't get to an end

* spellcheck: new words after rebase

* spellcheck: new words appearing out of nowhere

* chore

* review edits

* more review edits

* more edits

* wonky behavior

* wonky behavior 2

* wonky behavior 3

* change git behavior

* spellcheck: another bunch of new edits

* spellcheck: new words are coming out of nowhere

* CI: finding the master

* CI: fetching master implicitly

* CI: undebug

* new errors

* a bunch of new edits

* and some more

* Update node/core/approval-voting/src/approval_db/v1/mod.rs
Co-authored-by: Andronik Ordian <write@reusable.software>

* Update xcm/xcm-executor/src/assets.rs
Co-authored-by: Andronik Ordian <write@reusable.software>

* Apply suggestions from code review
Co-authored-by: Andronik Ordian <write@reusable.software>

* Suggestions from the code review

* CI: scan only changed files
Co-authored-by: Andronik Ordian <write@reusable.software>
parent 43920cd7
Pipeline #147422 canceled with stages in 7 minutes and 46 seconds
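The commit titles above trace the CI iteration toward a job that fetches master explicitly and scans only the files changed by the branch. A minimal sketch of that job shape, assuming `cargo-spellcheck` as the tool and with an illustrative job name and image (neither is confirmed by this page), might look like:

```yaml
# Hypothetical .gitlab-ci.yml fragment; job name and image are illustrative.
spellcheck:
  stage: test
  image: rust:latest
  script:
    # master must be available locally for the diff -- one commit above
    # ("CI: fetching master implicitly") addresses exactly this.
    - git fetch origin master
    # scan only the Rust files this branch touched ("CI: scan only changed files")
    - cargo spellcheck check $(git diff --name-only origin/master...HEAD -- '*.rs')
```

Scoping the check to the diff keeps the job fast and avoids failing on pre-existing misspellings elsewhere in the tree.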
@@ -152,7 +152,7 @@ pub(crate) struct Config {
 	n_cores: u32,
 	/// The zeroth delay tranche width.
 	zeroth_delay_tranche_width: u32,
-	/// The number of samples we do of relay_vrf_modulo.
+	/// The number of samples we do of `relay_vrf_modulo`.
 	relay_vrf_modulo_samples: u32,
 	/// The number of delay tranches in total.
 	n_delay_tranches: u32,
@@ -121,7 +121,7 @@ enum Mode {
 /// The approval voting subsystem.
 pub struct ApprovalVotingSubsystem {
-	/// LocalKeystore is needed for assignment keys, but not necessarily approval keys.
+	/// `LocalKeystore` is needed for assignment keys, but not necessarily approval keys.
 	///
 	/// We do a lot of VRF signing and need the keys to have low latency.
 	keystore: Arc<LocalKeystore>,
@@ -145,7 +145,7 @@ struct MetricsInner {
 	time_recover_and_approve: prometheus::Histogram,
 }
-/// Aproval Voting metrics.
+/// Approval Voting metrics.
 #[derive(Default, Clone)]
 pub struct Metrics(Option<MetricsInner>);
@@ -24,7 +24,7 @@ use std::pin::Pin;
 const TICK_DURATION_MILLIS: u64 = 500;
-/// A base unit of time, starting from the unix epoch, split into half-second intervals.
+/// A base unit of time, starting from the Unix epoch, split into half-second intervals.
 pub(crate) type Tick = u64;
 /// A clock which allows querying of the current tick as well as
@@ -285,7 +285,7 @@ fn runtime_api_error_does_not_stop_the_subsystem() {
 		}
 	);
-	// runtime api call fails
+	// runtime API call fails
 	assert_matches!(
 		overseer_recv(&mut virtual_overseer).await,
 		AllMessages::RuntimeApi(RuntimeApiMessage::Request(
@@ -104,7 +104,7 @@ pub enum Error {
 /// PoV data to validate.
 enum PoVData {
-	/// Allready available (from candidate selection).
+	/// Already available (from candidate selection).
 	Ready(Arc<PoV>),
 	/// Needs to be fetched from validator (we are checking a signed statement).
 	FetchFromValidator {
@@ -856,7 +856,7 @@ impl CandidateBackingJob {
 	/// This also does bounds-checking on the validator index and will return an error if the
 	/// validator index is out of bounds for the current validator set. It's expected that
 	/// this should never happen due to the interface of the candidate backing subsystem -
-	/// the networking component repsonsible for feeding statements to the backing subsystem
+	/// the networking component responsible for feeding statements to the backing subsystem
 	/// is meant to check the signature and provenance of all statements before submission.
 	async fn dispatch_new_statement_to_dispute_coordinator(
 		&self,
@@ -312,5 +312,5 @@ impl JobTrait for BitfieldSigningJob {
 	}
 }
-/// BitfieldSigningSubsystem manages a number of bitfield signing jobs.
+/// `BitfieldSigningSubsystem` manages a number of bitfield signing jobs.
 pub type BitfieldSigningSubsystem<Spawner> = JobSubsystem<BitfieldSigningJob, Spawner>;
@@ -45,7 +45,7 @@ pub(super) trait Backend {
 	fn load_leaves(&self) -> Result<LeafEntrySet, Error>;
 	/// Load the stagnant list at the given timestamp.
 	fn load_stagnant_at(&self, timestamp: Timestamp) -> Result<Vec<Hash>, Error>;
-	/// Load all stagnant lists up to and including the given UNIX timestamp
+	/// Load all stagnant lists up to and including the given Unix timestamp
 	/// in ascending order.
 	fn load_stagnant_at_up_to(&self, up_to: Timestamp)
 		-> Result<Vec<(Timestamp, Vec<Hash>)>, Error>;
@@ -26,7 +26,7 @@
 //! ```
 //!
 //! The big-endian encoding is used for creating iterators over the key-value DB which are
-//! accessible by prefix, to find the earlist block number stored as well as the all stagnant
+//! accessible by prefix, to find the earliest block number stored as well as the all stagnant
 //! blocks.
 //!
 //! The `Vec`s stored are always non-empty. Empty `Vec`s are not stored on disk so there is no
@@ -534,7 +534,7 @@ async fn handle_active_leaf(
 			);
 			// If we don't know the weight, we can't import the block.
-			// And none of its descendents either.
+			// And none of its descendants either.
 			break;
 		}
 		Some(w) => w,
@@ -57,7 +57,7 @@ pub trait Backend {
 		where I: IntoIterator<Item = BackendWriteOp>;
 }
-/// An in-memory overllay for the backend.
+/// An in-memory overlay for the backend.
 ///
 /// This maintains read-only access to the underlying backend, but can be converted into a set of
 /// write operations which will, when written to the underlying backend, give the same view as the
@@ -121,7 +121,7 @@ impl<'a, B: 'a + Backend> OverlayedBackend<'a, B> {
 		self.inner.load_candidate_votes(session, candidate_hash)
 	}
-	/// Prepare a write to the 'earliest session' field of the DB.
+	/// Prepare a write to the "earliest session" field of the DB.
 	///
 	/// Later calls to this function will override earlier ones.
 	pub fn write_earliest_session(&mut self, session: SessionIndex) {
@@ -14,7 +14,7 @@
 // You should have received a copy of the GNU General Public License
 // along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
-//! V1 database for the dispute coordinator.
+//! `V1` database for the dispute coordinator.
 use polkadot_primitives::v1::{
 	CandidateReceipt, ValidDisputeStatementKind, InvalidDisputeStatementKind, ValidatorIndex,
@@ -212,7 +212,7 @@ pub enum DisputeStatus {
 	/// since the given timestamp.
 	#[codec(index = 1)]
 	ConcludedFor(Timestamp),
-	/// The dispute has been concluded agains the candidate
+	/// The dispute has been concluded against the candidate
 	/// since the given timestamp.
 	///
 	/// This takes precedence over `ConcludedFor` in the case that
@@ -144,7 +144,7 @@ async fn fetch_validation_code(virtual_overseer: &mut VirtualOverseer) {
 		)) => {
 			tx.send(Ok(Some(validation_code))).unwrap();
 		},
-		"overseer did not receive runtime api request for validation code",
+		"overseer did not receive runtime API request for validation code",
 	);
 }
@@ -243,7 +243,7 @@ fn cannot_participate_if_cannot_recover_validation_code() {
 		)) => {
 			tx.send(Ok(None)).unwrap();
 		},
-		"overseer did not receive runtime api request for validation code",
+		"overseer did not receive runtime API request for validation code",
 	);
 	virtual_overseer
@@ -40,13 +40,13 @@ pub enum InvalidCandidate {
 	///
 	/// (b) The candidate triggered a code path that has lead to the process death. For example,
 	/// the PVF found a way to consume unbounded amount of resources and then it either exceeded
-	/// an rlimit (if set) or, again, invited OOM killer. Another possibility is a bug in
+	/// an `rlimit` (if set) or, again, invited OOM killer. Another possibility is a bug in
 	/// wasmtime allowed the PVF to gain control over the execution worker.
 	///
 	/// We attribute such an event to an invalid candidate in either case.
 	///
 	/// The rationale for this is that a glitch may lead to unfair rejecting candidate by a single
-	/// validator. If the glitch is somewhat more persistant the validator will reject all candidate
+	/// validator. If the glitch is somewhat more persistent the validator will reject all candidate
 	/// thrown at it and hopefully the operator notices it by decreased reward performance of the
 	/// validator. On the other hand, if the worker died because of (b) we would have better chances
 	/// to stop the attack.
@@ -185,7 +185,7 @@ impl Response {
 	}
 }
-/// The entrypoint that the spawned execute worker should start with. The socket_path specifies
+/// The entrypoint that the spawned execute worker should start with. The `socket_path` specifies
 /// the path to the socket used to communicate with the host.
 pub fn worker_entrypoint(socket_path: &str) {
 	worker_event_loop("execute", socket_path, |mut stream| async move {
@@ -54,7 +54,7 @@ const CONFIG: Config = Config {
 	},
 };
-/// Runs the prevaldation on the given code. Returns a [`RuntimeBlob`] if it succeeds.
+/// Runs the prevalidation on the given code. Returns a [`RuntimeBlob`] if it succeeds.
 pub fn prevalidate(code: &[u8]) -> Result<RuntimeBlob, sc_executor_common::error::WasmError> {
 	let blob = RuntimeBlob::new(code)?;
 	// It's assumed this function will take care of any prevalidation logic
@@ -49,7 +49,7 @@ pub struct ValidationHost {
 }
 impl ValidationHost {
-	/// Execute PVF with the given code, params and priority. The result of execution will be sent
+	/// Execute PVF with the given code, parameters and priority. The result of execution will be sent
 	/// to the provided result sender.
 	///
 	/// This is async to accommodate the fact a possibility of back-pressure. In the vast majority of
@@ -106,7 +106,7 @@ pub struct Config {
 	pub cache_path: PathBuf,
 	/// The path to the program that can be used to spawn the prepare workers.
 	pub prepare_worker_program_path: PathBuf,
-	/// The time alloted for a prepare worker to spawn and report to the host.
+	/// The time allotted for a prepare worker to spawn and report to the host.
 	pub prepare_worker_spawn_timeout: Duration,
 	/// The maximum number of workers that can be spawned in the prepare pool for tasks with the
 	/// priority below critical.
@@ -115,7 +115,7 @@ pub struct Config {
 	pub prepare_workers_hard_max_num: usize,
 	/// The path to the program that can be used to spawn the execute workers.
 	pub execute_worker_program_path: PathBuf,
-	/// The time alloted for an execute worker to spawn and report to the host.
+	/// The time allotted for an execute worker to spawn and report to the host.
 	pub execute_worker_spawn_timeout: Duration,
 	/// The maximum number of execute workers that can run at the same time.
 	pub execute_workers_max_num: usize,
@@ -147,7 +147,7 @@ impl Config {
 /// must be polled in order for validation host to function.
 ///
 /// The future should not return normally but if it does then that indicates an unrecoverable error.
-/// In that case all pending requests will be cancelled, dropping the result senders and new ones
+/// In that case all pending requests will be canceled, dropping the result senders and new ones
 /// will be rejected.
 pub fn start(config: Config) -> (ValidationHost, impl Future<Output = ()>) {
 	let (to_host_tx, to_host_rx) = mpsc::channel(10);
@@ -220,7 +220,7 @@ struct PendingExecutionRequest {
 }
 /// A mapping from an artifact ID which is in preparation state to the list of pending execution
-/// requests that should be executed once the artifact's prepration is finished.
+/// requests that should be executed once the artifact's preparation is finished.
 #[derive(Default)]
 struct AwaitingPrepare(HashMap<ArtifactId, Vec<PendingExecutionRequest>>);
@@ -628,7 +628,7 @@ mod tests {
 		}
 	}
-	/// Creates a new pvf which artifact id can be uniquely identified by the given number.
+	/// Creates a new PVF which artifact id can be uniquely identified by the given number.
 	fn artifact_id(descriminator: u32) -> ArtifactId {
 		Pvf::from_discriminator(descriminator).as_artifact_id()
 	}
@@ -23,14 +23,14 @@
 //!
 //! Then using the handle the client can send two types of requests:
 //!
-//! (a) PVF execution. This accepts the PVF [params][`polkadot_parachain::primitives::ValidationParams`]
+//! (a) PVF execution. This accepts the PVF [`params`][`polkadot_parachain::primitives::ValidationParams`]
 //! and the PVF [code][`Pvf`], prepares (verifies and compiles) the code, and then executes PVF
-//! with the params.
+//! with the `params`.
 //!
 //! (b) Heads up. This request allows to signal that the given PVF may be needed soon and that it
 //! should be prepared for execution.
 //!
-//! The preparation results are cached for some time after they either used or was signalled in heads up.
+//! The preparation results are cached for some time after they either used or was signaled in heads up.
 //! All requests that depends on preparation of the same PVF are bundled together and will be executed
 //! as soon as the artifact is prepared.
@@ -70,7 +70,7 @@
 //!
 //! The execute workers will be fed by the requests from the execution queue, which is basically a
 //! combination of a path to the compiled artifact and the
-//! [params][`polkadot_parachain::primitives::ValidationParams`].
+//! [`params`][`polkadot_parachain::primitives::ValidationParams`].
 //!
 //! Each fixed interval of time a pruning task will run. This task will remove all artifacts that
 //! weren't used or received a heads up signal for a while.
@@ -80,7 +80,7 @@ pub enum FromPool {
 	Spawned(Worker),
 	/// The given worker either succeeded or failed the given job. Under any circumstances the
-	/// artifact file has been written. The bool says whether the worker ripped.
+	/// artifact file has been written. The `bool` says whether the worker ripped.
 	Concluded(Worker, bool),
 	/// The given worker ceased to exist.
@@ -530,7 +530,7 @@ mod tests {
 	use std::task::Poll;
 	use super::*;
-	/// Creates a new pvf which artifact id can be uniquely identified by the given number.
+	/// Creates a new PVF which artifact id can be uniquely identified by the given number.
 	fn pvf(descriminator: u32) -> Pvf {
 		Pvf::from_discriminator(descriminator)
 	}