Unverified Commit 8a6af441 authored by Denis_P 🏑, committed by GitHub

WIP: CI: add spellcheck (#3421)



* CI: add spellcheck

* revert me

* CI: explicit command for spellchecker

* spellcheck: edit misspells

* CI: run spellcheck on diff

* spellcheck: edits

* spellcheck: edit misspells

* spellcheck: add rules

* spellcheck: mv configs

* spellcheck: more edits

* spellcheck: chore

* spellcheck: one more thing

* spellcheck: and another one

* spellcheck: seems like it doesn't get to an end

* spellcheck: new words after rebase

* spellcheck: new words appearing out of nowhere

* chore

* review edits

* more review edits

* more edits

* wonky behavior

* wonky behavior 2

* wonky behavior 3

* change git behavior

* spellcheck: another bunch of new edits

* spellcheck: new words are koming out of nowhere

* CI: finding the master

* CI: fetching master implicitly

* CI: undebug

* new errors

* a bunch of new edits

* and some more

* Update node/core/approval-voting/src/approval_db/v1/mod.rs
Co-authored-by: Andronik Ordian <write@reusable.software>

* Update xcm/xcm-executor/src/assets.rs
Co-authored-by: Andronik Ordian <write@reusable.software>

* Apply suggestions from code review
Co-authored-by: Andronik Ordian <write@reusable.software>

* Suggestions from the code review

* CI: scan only changed files
Co-authored-by: Andronik Ordian <write@reusable.software>
parent 43920cd7
Pipeline #147422 canceled with stages
in 7 minutes and 46 seconds
......@@ -164,6 +164,18 @@ check-runtime-benchmarks:
- ./scripts/gitlab/check_runtime_benchmarks.sh
- sccache -s
spellcheck:
stage: test
<<: *docker-env
<<: *rules-pr-only
script:
- cargo spellcheck --version
# compare with the commit parent to the PR, given it's from a default branch
- git fetch origin +${CI_DEFAULT_BRANCH}:${CI_DEFAULT_BRANCH}
- time cargo spellcheck check -vvv --cfg=scripts/gitlab/spellcheck.toml --checkers hunspell --code 1
-r $(git diff --name-only ${CI_COMMIT_SHA} $(git merge-base ${CI_COMMIT_SHA} ${CI_DEFAULT_BRANCH}))
allow_failure: true
build-adder-collator:
stage: test
<<: *collect-artifacts
......@@ -383,9 +395,9 @@ trigger-simnet:
variables:
TRGR_PROJECT: ${CI_PROJECT_NAME}
TRGR_REF: ${CI_COMMIT_REF_NAME}
# simnet project ID
# Simnet project ID
DWNSTRM_ID: 332
script:
# API trigger for a simnet job, argument value is set in the project variables
# API trigger for a Simnet job, argument value is set in the project variables
- ./scripts/gitlab/trigger_pipeline.sh --simnet-version=${SIMNET_REF}
allow_failure: true
# Polkadot
Implementation of a https://polkadot.network node in Rust based on the Substrate framework.
Implementation of a <https://polkadot.network> node in Rust based on the Substrate framework.
> **NOTE:** In 2018, we split our implementation of "Polkadot" from its development framework
> "Substrate". See the [Substrate][substrate-repo] repo for git history prior to 2018.
......@@ -19,7 +19,7 @@ either run the latest binary from our
[releases](https://github.com/paritytech/polkadot/releases) page, or install
Polkadot from one of our package repositories.
Installation from the debian or rpm repositories will create a `systemd`
Installation from the Debian or rpm repositories will create a `systemd`
service that can be used to run a Polkadot node. This is disabled by default,
and can be started by running `systemctl start polkadot` on demand (use
`systemctl enable polkadot` to make it auto-start after reboot). By default, it
......@@ -207,7 +207,7 @@ You can run a simple single-node development "network" on your machine by runnin
polkadot --dev
```
You can muck around by heading to https://polkadot.js.org/apps and choose "Local Node" from the
You can muck around by heading to <https://polkadot.js.org/apps> and choose "Local Node" from the
Settings menu.
### Local Two-node Testnet
......@@ -246,7 +246,3 @@ Ensure you replace `ALICE_BOOTNODE_ID_HERE` with the node ID from the output of
## License
Polkadot is [GPL 3.0 licensed](LICENSE).
## Important Notice
https://polkadot.network/testnetdisclaimer
......@@ -32,7 +32,7 @@ choosen
config/MS
crypto/MS
customizable/B
debian/M
Debian/M
decodable/MS
DOT/S
doesn
......
......@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Autogenerated weights for {{pallet}}
//! Autogenerated weights for {{cmd.pallet}}
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION {{version}}
//! DATE: {{date}}, STEPS: {{cmd.steps}}, REPEAT: {{cmd.repeat}}
......
......@@ -199,7 +199,7 @@ impl frame_system::Config for Runtime {
type BlockLength = bp_millau::BlockLength;
/// The weight of database operations that the runtime can invoke.
type DbWeight = DbWeight;
/// The designated SS58 prefix of this chain.
/// The designated `SS58` prefix of this chain.
type SS58Prefix = SS58Prefix;
/// The set code logic, just the default since we're not a parachain.
type OnSetCode = ();
......@@ -239,7 +239,7 @@ parameter_types! {
}
impl pallet_timestamp::Config for Runtime {
/// A timestamp: milliseconds since the unix epoch.
/// A timestamp: milliseconds since the Unix epoch.
type Moment = u64;
type OnTimestampSet = Aura;
type MinimumPeriod = MinimumPeriod;
......@@ -421,9 +421,9 @@ pub type Header = generic::Header<BlockNumber, Hashing>;
pub type Block = generic::Block<Header, UncheckedExtrinsic>;
/// A Block signed with a Justification
pub type SignedBlock = generic::SignedBlock<Block>;
/// BlockId type as expected by this runtime.
/// `BlockId` type as expected by this runtime.
pub type BlockId = generic::BlockId<Block>;
/// The SignedExtension to the basic transaction logic.
/// The `SignedExtension` to the basic transaction logic.
pub type SignedExtra = (
frame_system::CheckSpecVersion<Runtime>,
frame_system::CheckTxVersion<Runtime>,
......
......@@ -55,8 +55,8 @@ pub struct EthereumTransactionInclusionProof {
///
/// The assumption is that this pair will never appear more than once in
/// transactions included into finalized blocks. This is obviously true
/// for any existing eth-like chain (that keep current tx format), because
/// otherwise transaction can be replayed over and over.
/// for any existing eth-like chain (that keep current transaction format),
/// because otherwise transaction can be replayed over and over.
#[derive(Encode, Decode, PartialEq, RuntimeDebug)]
pub struct EthereumTransactionTag {
/// Account that has locked funds.
......
......@@ -34,8 +34,8 @@ frame_support::parameter_types! {
kovan_validators_configuration();
}
/// Max number of finalized headers to keep. It is equivalent of ~24 hours of
/// finalized blocks on current Kovan chain.
/// Max number of finalized headers to keep. It is equivalent of approximately
/// 24 hours of finalized blocks on current Kovan chain.
const FINALIZED_HEADERS_TO_KEEP: u64 = 20_000;
/// Aura engine configuration for Kovan chain.
......
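As a rough sanity check on the "approximately 24 hours" figure above, a small sketch assuming Kovan's ~4-second Aura block time (the block time is an assumption here, not stated in the diff):

```rust
/// Sketch: check that 20_000 finalized headers span roughly a day,
/// assuming Kovan's ~4-second Aura block time (assumed, not from the diff).
const FINALIZED_HEADERS_TO_KEEP: u64 = 20_000;
const ASSUMED_BLOCK_TIME_SECS: u64 = 4;

fn retention_hours() -> u64 {
    FINALIZED_HEADERS_TO_KEEP * ASSUMED_BLOCK_TIME_SECS / 3600
}

fn main() {
    // 20_000 headers * 4 s = 80_000 s, i.e. about 22 hours.
    println!("retention: ~{} hours", retention_hours());
}
```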
......@@ -206,7 +206,7 @@ impl frame_system::Config for Runtime {
type BlockLength = bp_rialto::BlockLength;
/// The weight of database operations that the runtime can invoke.
type DbWeight = DbWeight;
/// The designated SS58 prefix of this chain.
/// The designated `SS58` prefix of this chain.
type SS58Prefix = SS58Prefix;
/// The set code logic, just the default since we're not a parachain.
type OnSetCode = ();
......@@ -346,7 +346,7 @@ parameter_types! {
}
impl pallet_timestamp::Config for Runtime {
/// A timestamp: milliseconds since the unix epoch.
/// A timestamp: milliseconds since the Unix epoch.
type Moment = u64;
type OnTimestampSet = Aura;
type MinimumPeriod = MinimumPeriod;
......@@ -531,9 +531,9 @@ pub type Header = generic::Header<BlockNumber, Hashing>;
pub type Block = generic::Block<Header, UncheckedExtrinsic>;
/// A Block signed with a Justification
pub type SignedBlock = generic::SignedBlock<Block>;
/// BlockId type as expected by this runtime.
/// `BlockId` type as expected by this runtime.
pub type BlockId = generic::BlockId<Block>;
/// The SignedExtension to the basic transaction logic.
/// The `SignedExtension` to the basic transaction logic.
pub type SignedExtra = (
frame_system::CheckSpecVersion<Runtime>,
frame_system::CheckTxVersion<Runtime>,
......@@ -1060,7 +1060,7 @@ impl_runtime_apis! {
/// Millau account ownership digest from Rialto.
///
/// The byte vector returned by this function should be signed with a Millau account private key.
/// This way, the owner of `rialto_account_id` on Rialto proves that the 'millau' account private key
/// This way, the owner of `rialto_account_id` on Rialto proves that the Millau account private key
/// is also under his control.
pub fn rialto_to_millau_account_ownership_digest<Call, AccountId, SpecVersion>(
millau_call: &Call,
......
......@@ -110,7 +110,7 @@ impl TPruningStrategy for PruningStrategy {
}
}
/// ChainTime provider
/// `ChainTime` provider
#[derive(Default)]
pub struct ChainTime;
......
......@@ -40,10 +40,10 @@ pub struct ProofParams<Recipient> {
/// When true, recipient must exist before import.
pub recipient_exists: bool,
/// When 0, transaction should have minimal possible size. When this value has non-zero value n,
/// transaction size should be (if possible) near to MIN_SIZE + n * SIZE_FACTOR.
/// transaction size should be (if possible) near to `MIN_SIZE + n * SIZE_FACTOR`.
pub transaction_size_factor: u32,
/// When 0, proof should have minimal possible size. When this value has non-zero value n,
/// proof size should be (if possible) near to MIN_SIZE + n * SIZE_FACTOR.
/// proof size should be (if possible) near to `MIN_SIZE + n * SIZE_FACTOR`.
pub proof_size_factor: u32,
}
......
......@@ -19,7 +19,7 @@
//! The messages are interpreted directly as runtime `Call`. We attempt to decode
//! them and then dispatch as usual. To prevent compatibility issues, the Calls have
//! to include a `spec_version`. This will be checked before dispatch. In the case of
//! a succesful dispatch an event is emitted.
//! a successful dispatch an event is emitted.
#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]
......@@ -52,7 +52,7 @@ pub trait Config<I = DefaultInstance>: frame_system::Config {
/// The overarching event type.
type Event: From<Event<Self, I>> + Into<<Self as frame_system::Config>::Event>;
/// Id of the message. Whenever message is passed to the dispatch module, it emits
/// event with this id + dispatch result. Could be e.g. (LaneId, MessageNonce) if
/// event with this id + dispatch result. Could be e.g. (`LaneId`, `MessageNonce`) if
/// it comes from the messages module.
type MessageId: Parameter;
/// Type of account ID on source chain.
......@@ -77,13 +77,13 @@ pub trait Config<I = DefaultInstance>: frame_system::Config {
/// The type that is used to wrap the `Self::Call` when it is moved over bridge.
///
/// The idea behind this is to avoid `Call` conversion/decoding until we'll be sure
/// that all other stuff (like `spec_version`) is ok. If we would try to decode
/// that all other stuff (like `spec_version`) is OK. If we would try to decode
/// `Call` which has been encoded using previous `spec_version`, then we might end
/// up with decoding error, instead of `MessageVersionSpecMismatch`.
type EncodedCall: Decode + Encode + Into<Result<<Self as Config<I>>::Call, ()>>;
/// A type which can be turned into an AccountId from a 256-bit hash.
/// A type which can be turned into an `AccountId` from a 256-bit hash.
///
/// Used when deriving target chain AccountIds from source chain AccountIds.
/// Used when deriving target chain `AccountId`s from source chain `AccountId`s.
type AccountIdConverter: sp_runtime::traits::Convert<sp_core::hash::H256, Self::AccountId>;
}
......
......@@ -16,7 +16,7 @@
//! Pallet for checking GRANDPA Finality Proofs.
//!
//! Adapted copy of substrate/client/finality-grandpa/src/justification.rs. If origin
//! Adapted copy of `substrate/client/finality-grandpa/src/justification.rs`. If origin
//! will ever be moved to the sp_finality_grandpa, we should reuse that implementation.
use codec::{Decode, Encode};
......@@ -57,7 +57,7 @@ pub enum Error {
InvalidJustificationTarget,
/// The authority has provided an invalid signature.
InvalidAuthoritySignature,
/// The justification contains precommit for header that is not a descendant of the commit header.
/// The justification contains pre-commit for header that is not a descendant of the commit header.
PrecommitIsNotCommitDescendant,
/// The cumulative weight of all votes in the justification is not enough to justify commit
/// header finalization.
......@@ -119,7 +119,7 @@ where
if signed.precommit.target_number < justification.commit.target_number {
return Err(Error::PrecommitIsNotCommitDescendant);
}
// all precommits must be for target block descendents
// all precommits must be for target block descendants
chain = chain.ensure_descendant(&justification.commit.target_hash, &signed.precommit.target_hash)?;
// since we know now that the precommit target is the descendant of the justification target,
// we may increase 'weight' of the justification target
......@@ -154,7 +154,7 @@ where
}
// check that the cumulative weight of validators voted for the justification target (or one
// of its descendents) is larger than required threshold.
// of its descendants) is larger than required threshold.
let threshold = authorities_set.threshold().0.into();
if cumulative_weight >= threshold {
Ok(())
......
......@@ -65,7 +65,7 @@ pub enum Subcommand {
#[cfg(not(feature = "try-runtime"))]
TryRuntime,
/// Key management cli utilities
/// Key management CLI utilities
Key(sc_cli::KeySubcommand),
}
......
......@@ -81,11 +81,11 @@ impl sp_std::fmt::Debug for CandidateHash {
pub type Nonce = u32;
/// The balance of an account.
/// 128-bits (or 38 significant decimal figures) will allow for 10m currency (10^7) at a resolution
/// to all for one second's worth of an annualised 50% reward be paid to a unit holder (10^11 unit
/// denomination), or 10^18 total atomic units, to grow at 50%/year for 51 years (10^9 multiplier)
/// for an eventual total of 10^27 units (27 significant decimal figures).
/// We round denomination to 10^12 (12 sdf), and leave the other redundancy at the upper end so
/// 128-bits (or 38 significant decimal figures) will allow for 10 m currency (`10^7`) at a resolution
/// to allow for one second's worth of an annualised 50% reward to be paid to a unit holder (`10^11` unit
/// denomination), or `10^18` total atomic units, to grow at 50%/year for 51 years (`10^9` multiplier)
/// for an eventual total of `10^27` units (27 significant decimal figures).
/// We round denomination to `10^12` (12 SDF), and leave the other redundancy at the upper end so
/// that 32 bits may be multiplied with a balance in 128 bits without worrying about overflow.
pub type Balance = u128;
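The overflow claim in the comment above (a 32-bit factor times a 128-bit balance) can be checked with a small sketch; the helper and bounds below are illustrative, not part of the runtime:

```rust
/// Returns true when `total * factor` fits in a u128 without overflow.
fn fits_in_u128(total: u128, factor: u32) -> bool {
    total.checked_mul(factor as u128).is_some()
}

fn main() {
    // Total issuance bounded by 10^27 atomic units (~2^90); a full 32-bit
    // multiplier keeps the product near 2^122, comfortably below 2^128.
    assert!(fits_in_u128(10u128.pow(27), u32::MAX));
    // The headroom is not unlimited: an unbounded balance would overflow.
    assert!(!fits_in_u128(u128::MAX, 2));
}
```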
......@@ -99,7 +99,7 @@ pub type BlockId = generic::BlockId<Block>;
/// Opaque, encoded, unchecked extrinsic.
pub use sp_runtime::OpaqueExtrinsic as UncheckedExtrinsic;
/// The information that goes alongside a transfer_into_parachain operation. Entirely opaque, it
/// The information that goes alongside a `transfer_into_parachain` operation. Entirely opaque, it
/// will generally be used for identifying the reason for the transfer. Typically it will hold the
/// destination account to which the transfer should be credited. If still more information is
/// needed, then this should be a hash with the pre-image presented via an off-chain mechanism on
......@@ -144,7 +144,7 @@ pub struct OutboundHrmpMessage<Id> {
pub data: sp_std::vec::Vec<u8>,
}
/// V1 primitives.
/// `V1` primitives.
pub mod v1 {
pub use super::*;
}
......@@ -20,7 +20,7 @@ One particular subsystem (subsystem under test) interacts with a
mocked overseer that is made to assert incoming and outgoing messages
of the subsystem under test.
This is largely present today, but has some fragmentation in the evolved
integration test implementation. A proc-macro/macro_rules would allow
integration test implementation. A `proc-macro`/`macro_rules` would allow
for more consistent implementation and structure.
#### Behavior tests (3)
......@@ -29,27 +29,25 @@ Launching small scale networks, with multiple adversarial nodes without any furt
This should include tests around the thresholds in order to evaluate the error handling once certain
assumed invariants fail.
For this purpose based on `AllSubsystems` and proc-macro `AllSubsystemsGen`.
For this purpose based on `AllSubsystems` and `proc-macro` `AllSubsystemsGen`.
This assumes a simplistic test runtime.
#### Testing at scale (4)
Launching many nodes with configurable network speed and node features in a cluster of nodes.
At this scale the [`simnet`][simnet] comes into play which launches a full cluster of nodes.
At this scale the [Simnet][simnet] comes into play which launches a full cluster of nodes.
The scale is handled by spawning a kubernetes cluster and the meta description
is covered by [`gurke`][gurke].
Asserts are made using grafana rules, based on the existing prometheus metrics. This can
is covered by [Gurke][Gurke].
Asserts are made using Grafana rules, based on the existing prometheus metrics. This can
be extended by adding an additional service translating `jaeger` spans into additional
prometheus metrics, avoiding further polkadot source changes.
_Behavior tests_ and _testing at scale_ have a naturally soft boundary.
The most significant difference is the presence of a real network and
the number of nodes, since a single host is often not capable of running
multiple nodes at once.
---
## Coverage
......@@ -93,15 +91,15 @@ miniserve -r ./coverage
grcov . --binary-path ./target/debug/ -s . -t lcov --branch --ignore-not-existing --ignore "/*" -o lcov.info
```
The test coverage in `lcov` can the be published to <codecov.io>.
The test coverage in `lcov` can then be published to <https://codecov.io>.
```sh
bash <(curl -s https://codecov.io/bash) -f lcov.info
```
or just printed as part of the PR using a github action i.e. [jest-lcov-reporter](https://github.com/marketplace/actions/jest-lcov-reporter).
or just printed as part of the PR using a GitHub action, e.g. [`jest-lcov-reporter`](https://github.com/marketplace/actions/jest-lcov-reporter).
For full examples on how to use [grcov /w polkadot specifics see the github repo](https://github.com/mozilla/grcov#coverallscodecov-output).
For full examples on how to use [`grcov` with Polkadot specifics, see the GitHub repo](https://github.com/mozilla/grcov#coverallscodecov-output).
## Fuzzing
......@@ -146,13 +144,12 @@ Requirements:
* spawn nodes with preconfigured behaviors
* allow multiple types of configuration to be specified
* allow extensability via external crates
* allow extendability via external crates
* ...
---
## Implementation of different behavior strain nodes.
## Implementation of different behavior strain nodes
### Goals
......@@ -166,21 +163,21 @@ well as shorting the block time and epoch times down to a few `100ms` and a few
#### MVP
A simple small scale builder pattern would suffice for stage one impl of allowing to
A simple small scale builder pattern would suffice for stage one implementation of allowing to
replace individual subsystems.
An alternative would be to harness the existing `AllSubsystems` type
and replace the subsystems as needed.
#### Full proc-macro impl
#### Full `proc-macro` implementation
`Overseer` is a common pattern.
It could be extracted as proc macro and generative proc-macro.
It could be extracted as a generative `proc-macro`.
This would replace the `AllSubsystems` type as well as implicitly create
the `AllMessages` enum as `AllSubsystemsGen` does today.
The implementation is yet to be completed, see the [implementation PR](https://github.com/paritytech/polkadot/pull/2962) for details.
##### Declare an overseer impl
##### Declare an overseer implementation
```rust
struct BehaveMaleficient;
......@@ -237,8 +234,8 @@ fn main() -> eyre::Result<()> {
#### Simnet
Spawn a kubernetes cluster based on a meta description using [gurke] with the
[simnet] scripts.
Spawn a kubernetes cluster based on a meta description using [Gurke] with the
[Simnet] scripts.
Coordinated attacks of multiple nodes or subsystems must be made possible via
a side-channel, that is out of scope for this document.
......@@ -246,11 +243,11 @@ a side-channel, that is out of scope for this document.
The individual node configurations are done as targets with a particular
builder configuration.
#### Behavior tests w/o simnet
#### Behavior tests w/o Simnet
Commonly this will require multiple nodes, and most machines are limited to
running two or three nodes concurrently.
Hence, this is not the common case and is just an impl _idea_.
Hence, this is not the common case and is just an implementation _idea_.
```rust
behavior_testcase!{
......@@ -263,5 +260,5 @@ behavior_testcase!{
}
```
[gurke]: https://github.com/paritytech/gurke
[Gurke]: https://github.com/paritytech/gurke
[simnet]: https://github.com/paritytech/simnet_scripts
......@@ -20,7 +20,7 @@
//! The way we accomplish this is by erasure coding the data into n pieces
//! and constructing a merkle root of the data.
//!
//! Each of n validators stores their piece of data. We assume n=3f+k, 0 < k ≤ 3.
//! Each of n validators stores their piece of data. We assume `n = 3f + k`, `0 < k ≤ 3`.
//! f is the maximum number of faulty validators in the system.
//! The data is coded so any f+1 chunks can be used to reconstruct the full data.
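The `n = 3f + k` relation above determines `f` for any validator count; a minimal sketch (the helper names are illustrative, not from the erasure-coding crate):

```rust
/// With n = 3f + k and 0 < k <= 3, the tolerated number of faulty
/// validators is f = (n - 1) / 3, and f + 1 chunks reconstruct the data.
fn max_faulty(n_validators: usize) -> usize {
    n_validators.saturating_sub(1) / 3
}

fn recovery_threshold(n_validators: usize) -> usize {
    max_faulty(n_validators) + 1
}

fn main() {
    assert_eq!(max_faulty(10), 3);           // 10 = 3*3 + 1, so f = 3
    assert_eq!(recovery_threshold(10), 4);   // any 4 of 10 chunks suffice
    assert_eq!(recovery_threshold(1000), 334);
}
```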
......@@ -58,7 +58,7 @@ pub enum Error {
/// Chunks not of uniform length or the chunks are empty.
#[error("Chunks are not uniform, mismatch in length or are zero sized")]
NonUniformChunks,
/// An uneven byte-length of a shard is not valid for GF(2^16) encoding.
/// An uneven byte-length of a shard is not valid for `GF(2^16)` encoding.
#[error("Uneven length is not valid for field GF(2^16)")]
UnevenLength,
/// Chunk index out of bounds.
......
// Copyright 2017-2020 Parity Technologies (UK) Ltd.
// Copyright 2017-2021 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
......
......@@ -443,7 +443,7 @@ struct MetricsInner {
new_activations_per_availability_core: prometheus::Histogram,
}
/// CollationGenerationSubsystem metrics.
/// `CollationGenerationSubsystem` metrics.
#[derive(Default, Clone)]
pub struct Metrics(Option<MetricsInner>);
......
......@@ -297,8 +297,8 @@ fn filled_tranche_iterator<'a>(
pre.chain(approval_entries_filled).chain(post)
}
/// Computes the number of no_show validators in a set of assignments given the relevant approvals
/// and tick parameters. This method also returns the next tick at which a no_show will occur
/// Computes the number of `no_show` validators in a set of assignments given the relevant approvals
/// and tick parameters. This method also returns the next tick at which a `no_show` will occur
/// amongst the set of validators that have not submitted an approval.
///
/// If the returned `next_no_show` is not None, there are two possible cases for the value of
......
......@@ -38,7 +38,7 @@ const STORED_BLOCKS_KEY: &[u8] = b"Approvals_StoredBlocks";
#[cfg(test)]
pub mod tests;
/// DbBackend is a concrete implementation of the higher-level Backend trait
/// `DbBackend` is a concrete implementation of the higher-level Backend trait
pub struct DbBackend {
inner: Arc<dyn KeyValueDB>,
config: Config,
......