Unverified commit 8a6af441 authored by Denis_P, committed by GitHub

WIP: CI: add spellcheck (#3421)



* CI: add spellcheck

* revert me

* CI: explicit command for spellchecker

* spellcheck: edit misspells

* CI: run spellcheck on diff

* spellcheck: edits

* spellcheck: edit misspells

* spellcheck: add rules

* spellcheck: mv configs

* spellcheck: more edits

* spellcheck: chore

* spellcheck: one more thing

* spellcheck: and another one

* spellcheck: seems like it doesn't get to an end

* spellcheck: new words after rebase

* spellcheck: new words appearing out of nowhere

* chore

* review edits

* more review edits

* more edits

* wonky behavior

* wonky behavior 2

* wonky behavior 3

* change git behavior

* spellcheck: another bunch of new edits

* spellcheck: new words are coming out of nowhere

* CI: finding the master

* CI: fetching master implicitly

* CI: undebug

* new errors

* a bunch of new edits

* and some more

* Update node/core/approval-voting/src/approval_db/v1/mod.rs

Co-authored-by: Andronik Ordian <write@reusable.software>

* Update xcm/xcm-executor/src/assets.rs

Co-authored-by: Andronik Ordian <write@reusable.software>

* Apply suggestions from code review

Co-authored-by: Andronik Ordian <write@reusable.software>

* Suggestions from the code review

* CI: scan only changed files

Co-authored-by: Andronik Ordian <write@reusable.software>
parent 43920cd7
Pipeline #147422 canceled with stages in 7 minutes and 46 seconds
......@@ -56,7 +56,7 @@ impl<N: Network, AD: AuthorityDiscovery> Service<N, AD> {
///
/// This method will also disconnect from previously connected validators not in the `validator_ids` set.
/// it takes `network_service` and `authority_discovery_service` by value
/// and returns them as a workaround for the Future: Send requirement imposed by async fn impl.
/// and returns them as a workaround for the Future: Send requirement imposed by async function implementation.
pub async fn on_request(
&mut self,
validator_ids: Vec<AuthorityDiscoveryId>,
......
......@@ -58,7 +58,7 @@ const COST_APPARENT_FLOOD: Rep = Rep::CostMinor("Message received when previous
///
/// This is to protect from a single slow validator preventing collations from happening.
///
/// With a collation size of 5Meg and bandwidth of 500Mbit/s (requirement for Kusama validators),
/// With a collation size of 5MB and bandwidth of 500Mbit/s (requirement for Kusama validators),
/// the transfer should be possible within 0.1 seconds. 400 milliseconds should therefore be
/// plenty and should be low enough for later validators to still be able to finish on time.
///
......
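The 400 ms budget in the comment above can be sanity-checked with a few lines of arithmetic. This is a standalone sketch, not part of the diff; the figures (5 MB collation, 500 Mbit/s) are taken from the doc comment itself.

```rust
fn main() {
    // Figures from the doc comment: 5 MB collation, 500 Mbit/s bandwidth.
    let collation_bits = 5.0 * 8.0 * 1_000_000.0; // 5 MB expressed in bits
    let bandwidth_bits_per_s = 500.0 * 1_000_000.0; // 500 Mbit/s
    let transfer_s = collation_bits / bandwidth_bits_per_s;
    // 40 Mbit / 500 Mbit/s = 0.08 s, comfortably inside the 400 ms budget.
    println!("transfer time: {transfer_s} s");
    assert!(transfer_s < 0.4);
}
```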
......@@ -863,7 +863,7 @@ fn collators_reject_declare_messages() {
///
/// After the first response is done, the passed in lambda will be called with the receiver for the
/// next response and a sender for giving feedback on the response of the first transmission. After
/// the lamda has passed it is assumed that the second response is sent, which is checked by this
/// the lambda has passed it is assumed that the second response is sent, which is checked by this
/// function.
///
/// The lambda can trigger occasions on which the second response should be sent, like timeouts,
......
......@@ -71,7 +71,7 @@ const BENEFIT_NOTIFY_GOOD: Rep = Rep::BenefitMinor("A collator was noted good by
///
/// This is to protect from a single slow collator preventing collations from happening.
///
/// With a collation size of 5Meg and bandwidth of 500Mbit/s (requirement for Kusama validators),
/// With a collation size of 5MB and bandwidth of 500Mbit/s (requirement for Kusama validators),
/// the transfer should be possible within 0.1 seconds. 400 milliseconds should therefore be
/// plenty, even with multiple heads and should be low enough for later collators to still be able
/// to finish on time.
......@@ -718,7 +718,7 @@ where
}
/// A peer's view has changed. A number of things should be done:
/// - Ongoing collation requests have to be cancelled.
/// - Ongoing collation requests have to be canceled.
/// - Advertisements by this peer that are no longer relevant have to be removed.
async fn handle_peer_view_change(
state: &mut State,
......@@ -738,7 +738,7 @@ async fn handle_peer_view_change(
/// This function will
/// - Check for duplicate requests.
/// - Check if the requested collation is in our view.
/// - Update PerRequest records with the `result` field if necessary.
/// - Update `PerRequest` records with the `result` field if necessary.
/// And as such invocations of this function may rely on that.
async fn request_collation<Context>(
ctx: &mut Context,
......
......@@ -62,15 +62,15 @@ pub enum Fatal {
#[error("Spawning subsystem task failed")]
SpawnTask(#[source] SubsystemError),
/// DisputeSender mpsc receiver exhausted.
/// `DisputeSender` mpsc receiver exhausted.
#[error("Erasure chunk requester stream exhausted")]
SenderExhausted,
/// Errors coming from runtime::Runtime.
/// Errors coming from `runtime::Runtime`.
#[error("Error while accessing runtime information")]
Runtime(#[from] runtime::Fatal),
/// Errors coming from DisputeSender
/// Errors coming from `DisputeSender`
#[error("Error while accessing runtime information")]
Sender(#[from] sender::Fatal),
}
......@@ -78,7 +78,7 @@ pub enum Fatal {
/// Non-fatal errors of this subsystem.
#[derive(Debug, Error)]
pub enum NonFatal {
/// Errors coming from DisputeSender
/// Errors coming from `DisputeSender`
#[error("Error while accessing runtime information")]
Sender(#[from] sender::NonFatal),
}
......
......@@ -103,7 +103,7 @@ enum MuxedMessage {
///
/// - We need to make sure responses are actually sent (therefore we need to await futures
/// promptly).
/// - We need to update banned_peers accordingly to the result.
/// - We need to update `banned_peers` accordingly to the result.
ConfirmedImport(NonFatalResult<(PeerId, ImportStatementsResult)>),
/// A new request has arrived and should be handled.
......
......@@ -56,7 +56,7 @@ pub const ALICE_INDEX: ValidatorIndex = ValidatorIndex(1);
lazy_static! {
/// Mocked AuthorityDiscovery service.
/// Mocked `AuthorityDiscovery` service.
pub static ref MOCK_AUTHORITY_DISCOVERY: MockAuthorityDiscovery = MockAuthorityDiscovery::new();
// Creating an innocent looking `SessionInfo` is really expensive in a debug build. Around
// 700ms on my machine. We therefore cache those keys here:
......@@ -80,7 +80,7 @@ pub static ref MOCK_SESSION_INFO: SessionInfo =
..Default::default()
};
/// SessionInfo for the second session. (No more validators, but two more authorities.
/// `SessionInfo` for the second session. (No more validators, but two more authorities.
pub static ref MOCK_NEXT_SESSION_INFO: SessionInfo =
SessionInfo {
discovery_keys:
......
......@@ -199,12 +199,12 @@ where
failed_rx
}
/// We partition the list of all sorted `authorities` into sqrt(len) groups of sqrt(len) size
/// We partition the list of all sorted `authorities` into `sqrt(len)` groups of `sqrt(len)` size
/// and form a matrix where each validator is connected to all validators in its row and column.
/// This is similar to [web3] research proposed topology, except for the groups are not parachain
/// This is similar to `[web3]` research proposed topology, except for the groups are not parachain
/// groups (because not all validators are parachain validators and the group size is small),
/// but formed randomly via BABE randomness from two epochs ago.
/// This limits the amount of gossip peers to 2 * sqrt(len) and ensures the diameter of 2.
/// This limits the amount of gossip peers to 2 * `sqrt(len)` and ensures the diameter of 2.
///
/// [web3]: https://research.web3.foundation/en/latest/polkadot/networking/3-avail-valid.html#topology
async fn update_gossip_topology<Context>(
......
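The row/column matrix topology described in the doc comment can be sketched as follows. This is a hypothetical standalone helper for illustration, not the actual `update_gossip_topology` implementation; the function name and signature are assumptions.

```rust
/// Peers of `index` in a sqrt(len) x sqrt(len) matrix: everyone sharing
/// its row or its column (illustrative sketch only).
fn matrix_peers(index: usize, len: usize) -> Vec<usize> {
    let side = (len as f64).sqrt().ceil() as usize;
    let (row, col) = (index / side, index % side);
    (0..len)
        .filter(|&i| i != index && (i / side == row || i % side == col))
        .collect()
}

fn main() {
    // With 16 validators each one connects to 2 * sqrt(16) - 2 = 6 peers,
    // matching the "2 * sqrt(len)" bound quoted in the comment.
    let peers = matrix_peers(5, 16);
    assert_eq!(peers.len(), 6);
    println!("{peers:?}");
}
```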
......@@ -284,7 +284,7 @@ impl View {
/// Check if two views have the same heads.
///
/// Equivalent to the `PartialEq` fn,
/// Equivalent to the `PartialEq` function,
/// but ignores the `finalized_number` field.
pub fn check_heads_eq(&self, other: &Self) -> bool {
self.heads == other.heads
......@@ -325,7 +325,7 @@ pub mod v1 {
/// Seconded statement with large payload (e.g. containing a runtime upgrade).
///
/// We only gossip the hash in that case, actual payloads can be fetched from sending node
/// via req/response.
/// via request/response.
#[codec(index = 1)]
LargeStatement(StatementMetadata),
}
......
......@@ -16,18 +16,18 @@
//! Overview over request/responses as used in `Polkadot`.
//!
//! enum Protocol .... List of all supported protocols.
//! `enum Protocol` .... List of all supported protocols.
//!
//! enum Requests .... List of all supported requests, each entry matches one in protocols, but
//! `enum Requests` .... List of all supported requests, each entry matches one in protocols, but
//! has the actual request as payload.
//!
//! struct IncomingRequest .... wrapper for incoming requests, containing a sender for sending
//! `struct IncomingRequest` .... wrapper for incoming requests, containing a sender for sending
//! responses.
//!
//! struct OutgoingRequest .... wrapper for outgoing requests, containing a sender used by the
//! `struct OutgoingRequest` .... wrapper for outgoing requests, containing a sender used by the
//! networking code for delivering responses/delivery errors.
//!
//! trait `IsRequest` .... A trait describing a particular request. It is used for gathering meta
//! `trait IsRequest` .... A trait describing a particular request. It is used for gathering meta
//! data, like what is the corresponding response type.
//!
//! Versioned (v1 module): The actual requests and responses as sent over the network.
......@@ -72,7 +72,7 @@ pub enum Protocol {
/// Minimum bandwidth we expect for validators - 500Mbit/s is the recommendation, so approximately
/// 50Meg bytes per second:
/// 50MB per second:
const MIN_BANDWIDTH_BYTES: u64 = 50 * 1024 * 1024;
/// Default request timeout in seconds.
......
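The constant above can be checked against the stated 500 Mbit/s recommendation: 500 Mbit/s is 62.5 MB/s in decimal units, and `50 * 1024 * 1024` rounds that down to a conservative 50 MiB/s floor. A quick standalone check:

```rust
fn main() {
    // 500 Mbit/s converted to bytes per second (decimal units):
    let exact_bytes_per_s = 500_000_000u64 / 8; // 62_500_000
    // The constant from the diff above, a conservative round-down:
    let min_bandwidth_bytes = 50u64 * 1024 * 1024; // 52_428_800
    assert!(min_bandwidth_bytes < exact_bytes_per_s);
    println!("{min_bandwidth_bytes} B/s <= {exact_bytes_per_s} B/s");
}
```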
......@@ -79,7 +79,7 @@ impl Requests {
///
/// Note: `Requests` is just an enum collecting all supported requests supported by network
/// bridge, it is never sent over the wire. This function just encodes the individual requests
/// contained in the enum.
/// contained in the `enum`.
pub fn encode_request(self) -> (Protocol, OutgoingRequest<Vec<u8>>) {
match self {
Self::ChunkFetching(r) => r.encode_request(),
......@@ -219,7 +219,7 @@ impl From<oneshot::Canceled> for RequestError {
/// `IncomingRequest`s are produced by `RequestMultiplexer` on behalf of the network bridge.
#[derive(Debug)]
pub struct IncomingRequest<Req> {
/// PeerId of sending peer.
/// `PeerId` of sending peer.
pub peer: PeerId,
/// The sent request.
pub payload: Req,
......@@ -227,7 +227,7 @@ pub struct IncomingRequest<Req> {
pub pending_response: OutgoingResponseSender<Req>,
}
/// Sender for sendinb back responses on an `IncomingRequest`.
/// Sender for sending back responses on an `IncomingRequest`.
#[derive(Debug)]
pub struct OutgoingResponseSender<Req>{
pending_response: oneshot::Sender<netconfig::OutgoingResponse>,
......@@ -241,9 +241,9 @@ where
{
/// Send the response back.
///
/// On success we return Ok(()), on error we return the not sent `Response`.
/// On success we return `Ok(())`, on error we return the not sent `Response`.
///
/// netconfig::OutgoingResponse exposes a way of modifying the peer's reputation. If needed we
/// `netconfig::OutgoingResponse` exposes a way of modifying the peer's reputation. If needed we
/// can change this function to expose this feature as well.
pub fn send_response(self, resp: Req::Response) -> Result<(), Req::Response> {
self.pending_response
......@@ -375,7 +375,7 @@ where
}
}
/// Future for actually receiving a typed response for an OutgoingRequest.
/// Future for actually receiving a typed response for an `OutgoingRequest`.
async fn receive_response<Req>(
rec: oneshot::Receiver<Result<Vec<u8>, network::RequestFailure>>,
) -> OutgoingResult<Req::Response>
......
......@@ -172,7 +172,7 @@ impl IsRequest for AvailableDataFetchingRequest {
pub struct StatementFetchingRequest {
/// Data needed to locate and identify the needed statement.
pub relay_parent: Hash,
/// Hash of candidate that was used create the CommitedCandidateRecept.
/// Hash of the candidate that was used to create the `CommittedCandidateReceipt`.
pub candidate_hash: CandidateHash,
}
......
......@@ -17,7 +17,7 @@
//! The Statement Distribution Subsystem.
//!
//! This is responsible for distributing signed statements about candidate
//! validity amongst validators.
//! validity among validators.
#![deny(unused_crate_dependencies)]
#![warn(missing_docs)]
......@@ -208,7 +208,7 @@ struct PeerRelayParentKnowledge {
/// How many large statements this peer already sent us.
///
/// Flood protection for large statements is rather hard and as soon as we get
/// https://github.com/paritytech/polkadot/issues/2979 implemented also no longer necessary.
/// `https://github.com/paritytech/polkadot/issues/2979` implemented also no longer necessary.
/// Reason: We keep messages around until we fetched the payload, but if a node makes up
/// statements and never provides the data, we will keep it around for the slot duration. Not
/// even signature checking would help, as the sender, if a validator, can just sign arbitrary
......@@ -290,7 +290,7 @@ impl PeerRelayParentKnowledge {
/// Provide the maximum message count that we can receive per candidate. In practice we should
/// not receive more statements for any one candidate than there are members in the group assigned
/// to that para, but this maximum needs to be lenient to account for equivocations that may be
/// cross-group. As such, a maximum of 2 * n_validators is recommended.
/// cross-group. As such, a maximum of 2 * `n_validators` is recommended.
///
/// This returns an error if the peer should not have sent us this message according to protocol
/// rules for flood protection.
......@@ -459,7 +459,7 @@ impl PeerData {
/// Provide the maximum message count that we can receive per candidate. In practice we should
/// not receive more statements for any one candidate than there are members in the group assigned
/// to that para, but this maximum needs to be lenient to account for equivocations that may be
/// cross-group. As such, a maximum of 2 * n_validators is recommended.
/// cross-group. As such, a maximum of 2 * `n_validators` is recommended.
///
/// This returns an error if the peer should not have sent us this message according to protocol
/// rules for flood protection.
......
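The flood-protection bound described in the two doc comments above (at most `2 * n_validators` statements per candidate, the factor of two allowing for cross-group equivocations) can be sketched as a simple predicate. The helper name is illustrative, not the subsystem's actual API.

```rust
/// Lenient per-candidate statement limit: up to two statements per
/// validator, to tolerate equivocations that cross group boundaries.
/// (Hypothetical helper for illustration only.)
fn within_flood_limit(received: usize, n_validators: usize) -> bool {
    received <= 2 * n_validators
}

fn main() {
    // A group backed by 5 validators may send up to 10 statements.
    assert!(within_flood_limit(10, 5));
    assert!(!within_flood_limit(11, 5));
    println!("flood limit check ok");
}
```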
......@@ -45,7 +45,7 @@ pub enum RequesterMessage {
candidate_hash: CandidateHash,
tx: oneshot::Sender<Vec<PeerId>>
},
/// Fetching finished, ask for verification. If verification failes, task will continue asking
/// Fetching finished, ask for verification. If verification fails, task will continue asking
/// peers for data.
Finished {
/// Relay parent this candidate is in the context of.
......
......@@ -42,7 +42,7 @@ pub(crate) fn impl_misc(info: &OverseerInfo) -> proc_macro2::TokenStream {
signals_received: SignalsReceived,
}
/// impl for wrapping message type...
/// implementation for wrapping message type...
#[#support_crate ::async_trait]
impl SubsystemSender< #wrapper_message > for #subsystem_sender_name {
async fn send_message(&mut self, msg: #wrapper_message) {
......
......@@ -98,7 +98,7 @@ pub(crate) fn impl_overseer_struct(info: &OverseerInfo) -> proc_macro2::TokenStr
}
impl #generics #overseer_name #generics #where_clause {
/// Send the given signal, a terminatin signal, to all subsystems
/// Send the given signal, a termination signal, to all subsystems
/// and wait for all subsystems to go down.
///
/// The definition of a termination signal is up to the user and
......
......@@ -86,14 +86,14 @@ pub(crate) struct SubSysField {
/// Type to be consumed by the subsystem.
pub(crate) consumes: Path,
/// If `no_dispatch` is present, if the message is incoming via
/// an extern `Event`, it will not be dispatched to all subsystems.
/// an `extern` `Event`, it will not be dispatched to all subsystems.
pub(crate) no_dispatch: bool,
/// If the subsystem implementation is blocking execution and hence
/// has to be spawned on a separate thread or thread pool.
pub(crate) blocking: bool,
/// The subsystem is a work in progress.
/// Avoids dispatching `Wrapper` type messages, but generates the variants.
/// Does not require the subsystem to be instanciated with the builder pattern.
/// Does not require the subsystem to be instantiated with the builder pattern.
pub(crate) wip: bool,
}
......@@ -133,7 +133,7 @@ pub(crate) struct SubSystemTags {
pub(crate) attrs: Vec<Attribute>,
#[allow(dead_code)]
pub(crate) no_dispatch: bool,
/// The subsystem is WIP, only generate the `Wrapper` variant, but do not forward messages
/// The subsystem is in progress, only generate the `Wrapper` variant, but do not forward messages
/// and also not include the subsystem in the list of subsystems.
pub(crate) wip: bool,
pub(crate) blocking: bool,
......
......@@ -225,7 +225,7 @@ pub trait AnnotateErrorOrigin: 'static + Send + Sync + std::error::Error {
/// An asynchronous subsystem task..
///
/// In essence it's just a newtype wrapping a `BoxFuture`.
/// In essence it's just a new type wrapping a `BoxFuture`.
pub struct SpawnedSubsystem<E>
where
E: std::error::Error
......@@ -366,12 +366,12 @@ impl<Signal, Message> From<Signal> for FromOverseer<Message, Signal> {
#[async_trait::async_trait]
pub trait SubsystemContext: Send + 'static {
/// The message type of this context. Subsystems launched with this context will expect
/// to receive messages of this type. Commonly uses the wrapping enum commonly called
/// to receive messages of this type. Commonly uses the wrapping `enum` commonly called
/// `AllMessages`.
type Message: std::fmt::Debug + Send + 'static;
/// And the same for signals.
type Signal: std::fmt::Debug + Send + 'static;
/// The overarching all messages enum.
/// The overarching all messages `enum`.
/// In some cases can be identical to `Self::Message`.
type AllMessages: From<Self::Message> + Send + 'static;
/// The sender type as provided by `sender()` and underlying.
......
......@@ -34,7 +34,7 @@ struct MetricsInner {
}
/// A sharable metrics type for usage with the overseer.
/// A shareable metrics type for usage with the overseer.
#[derive(Default, Clone)]
pub struct Metrics(Option<MetricsInner>);
......
......@@ -17,7 +17,7 @@
//! Legacy way of defining subsystems.
//!
//! In the future, everything should be set up using the generated
//! overeseer builder pattern instead.
//! overseer builder pattern instead.
use polkadot_node_subsystem_types::errors::SubsystemError;
use polkadot_overseer_gen::{
......@@ -170,7 +170,7 @@ impl<CV, CB, SD, AD, AR, BS, BD, P, RA, AS, NB, CA, CG, CP, ApD, ApV, GS>
}
}
/// Reference every indidviudal subsystem.
/// Reference every individual subsystem.
pub fn as_ref(&self) -> AllSubsystems<&'_ CV, &'_ CB, &'_ SD, &'_ AD, &'_ AR, &'_ BS, &'_ BD, &'_ P, &'_ RA, &'_ AS, &'_ NB, &'_ CA, &'_ CG, &'_ CP, &'_ ApD, &'_ ApV, &'_ GS> {
AllSubsystems {
candidate_validation: &self.candidate_validation,
......