parity / polkadot — Commit 7fc953f3
Authored Dec 17, 2020 by Cecile Tonglet
Merge commit d3a0c571 (no conflict); parents: 9a209644, d3a0c571
160 files changed
.github/ISSUE_TEMPLATE/release.md
```diff
@@ -8,23 +8,31 @@ title: Polkadot {{ env.VERSION }} Release checklist
 This is the release checklist for Polkadot {{ env.VERSION }}. **All** following
 checks should be completed before publishing a new release of the
 Polkadot/Kusama/Westend runtime or client. The current release candidate can be
-checked out with `git checkout {{ env.VERSION }}`
+checked out with `git checkout release-{{ env.VERSION }}`

 ### Runtime Releases

 These checks should be performed on the codebase prior to forking to a release-
 candidate branch.

 - [ ] Verify [`spec_version`](#spec-version) has been incremented since the
   last release for any native runtimes from any existing use on public
   (non-private/test) networks.
 - [ ] Verify [new migrations](#new-migrations) complete successfully, and the
   runtime state is correctly updated.
-- [ ] Verify previously [completed migrations](#old-migrations-removed) are
-  removed.
+- [ ] Verify previously [completed migrations](#old-migrations-removed) are
+  removed for any public (non-private/test) networks.
 - [ ] Verify pallet and [extrinsic ordering](#extrinsic-ordering) has stayed
   the same. Bump `transaction_version` if not.
 - [ ] Verify new extrinsics have been correctly whitelisted/blacklisted for
   [proxy filters](#proxy-filtering).
 - [ ] Verify [benchmarks](#benchmarks) have been updated for any modified
   runtime logic.
+
+The following checks can be performed after we have forked off to the release-
+candidate branch.
+
+- [ ] Verify [new migrations](#new-migrations) complete successfully, and the
+  runtime state is correctly updated for any public (non-private/test)
+  networks.
+- [ ] Verify [Polkadot JS API](#polkadot-js) are up to date with the latest
+  runtime changes.
```
```diff
@@ -59,7 +67,8 @@ Add any necessary assets to the release. They should include:
 The release notes should list:

-- The priority of the release (i.e., how quickly users should upgrade)
+- The priority of the release (i.e., how quickly users should upgrade) - this is
+  based on the max priority of any *client* changes.
 - Which native runtimes and their versions are included
 - The proposal hashes of the runtimes as built with
   [srtool](https://gitlab.com/chevdor/srtool)
```
```diff
@@ -77,16 +86,17 @@ A runtime upgrade must bump the spec number. This may follow a pattern with the
 client release (e.g. runtime v12 corresponds to v0.8.12, even if the current
 runtime is not v11).

-### Old Migrations Removed
-
-Any previous `on_runtime_upgrade` functions from old upgrades must be removed
-to prevent them from executing a second time. The `on_runtime_upgrade` function
-can be found in `runtime/<runtime>/src/lib.rs`.
-
 ### New Migrations

 Ensure that any migrations that are required due to storage or logic changes
 are included in the `on_runtime_upgrade` function of the appropriate pallets.

+### Old Migrations Removed
+
+Any previous `on_runtime_upgrade` functions from old upgrades must be removed
+to prevent them from executing a second time.
+
 ### Extrinsic Ordering

 Offline signing libraries depend on a consistent ordering of call indices and
```
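The migration rules in this checklist (run once, then remove or guard) can be modeled with a simple version guard. This is a hedged, std-only sketch — `MockStorage` and the rescaling logic are hypothetical stand-ins, not the real FRAME `on_runtime_upgrade` API — showing why a stale migration must never execute a second time:

```rust
use std::collections::HashMap;

// Simplified model of runtime storage guarded by a storage version.
pub struct MockStorage {
    pub version: u32,
    pub balances: HashMap<&'static str, u64>,
}

// Hypothetical migration: rescales all balances exactly once, then bumps
// the storage version so a second call is a no-op instead of corrupting state.
pub fn on_runtime_upgrade(storage: &mut MockStorage) -> bool {
    if storage.version >= 2 {
        return false; // already migrated; do not run again
    }
    for balance in storage.balances.values_mut() {
        *balance *= 2;
    }
    storage.version = 2;
    true
}
```

The version check plays the role that removing (or gating) old `on_runtime_upgrade` bodies plays in a real runtime.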
```diff
@@ -94,6 +104,23 @@ functions. Compare the metadata of the current and new runtimes and ensure that
 the `module index, call index` tuples map to the same set of functions. In case
 of a breaking change, increase `transaction_version`.

+To verify the order has not changed:
+
+1. Download the latest release-candidate binary either from the draft-release
+   on Github, or [AWS](https://releases.parity.io/polkadot/x86_64-debian:stretch/{{ env.VERSION }}-rc1/polkadot)
+   (adjust the rc in this URL as necessary).
+2. Run the release-candidate binary using a local chain:
+   `./polkadot --chain=polkadot-local` or `./polkadot --chain=kusama-local`
+3. Use [`polkadot-js-tools`](https://github.com/polkadot-js/tools) to compare
+   the metadata:
+   - For Polkadot: `docker run --network host jacogr/polkadot-js-tools metadata wss://rpc.polkadot.io ws://localhost:9944`
+   - For Kusama: `docker run --network host jacogr/polkadot-js-tools metadata wss://kusama-rpc.polkadot.io ws://localhost:9944`
+4. Things to look for in the output are lines like:
+   - `[Identity] idx 28 -> 25 (calls 15)` - indicates the index for `Identity` has changed
+   - `[+] Society, Recovery` - indicates the new version includes 2 additional modules/pallets.
+   - If no indices have changed, every module's line should look something like
+     `[Identity] idx 25 (calls 15)`
+
 Note: Adding new functions to the runtime does not constitute a breaking change
 as long as they are added to the end of a pallet (i.e., does not break any
 other call index).
```
...
...
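The extrinsic-ordering rule above can be sketched as a toy comparison of pallet indices. This is a simplified model under stated assumptions — the pallet-name-to-index maps are hypothetical, not the real runtime metadata types — illustrating when `transaction_version` must bump:

```rust
use std::collections::HashMap;

// Any moved module index or removed pallet is a breaking change requiring
// a `transaction_version` bump; pallets appended at the end are fine.
pub fn needs_transaction_version_bump(
    old: &HashMap<&str, u8>,
    new: &HashMap<&str, u8>,
) -> bool {
    old.iter().any(|(name, idx)| match new.get(name) {
        Some(new_idx) => new_idx != idx, // an `[Identity] idx 28 -> 25`-style move
        None => true,                    // pallet removed entirely
    })
}
```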
.github/workflows/publish-draft-release.yml

```diff
@@ -139,5 +139,5 @@ jobs:
         with:
           room_id: ${{ secrets.INTERNAL_POLKADOT_MATRIX_ROOM_ID }}
           access_token: ${{ secrets.MATRIX_ACCESS_TOKEN }}
-          message: "**New version of polkadot tagged**: ${{ github.ref }}<br/>Gav: Draft release created: ${{ needs.publish-draft-release.outputs.release_url }}"
+          message: "**New version of polkadot tagged**: ${{ github.ref }}<br/>Draft release created: ${{ needs.publish-draft-release.outputs.release_url }}"
           server: "matrix.parity.io"
```
.github/workflows/release-candidate.yml

```diff
@@ -45,7 +45,7 @@ jobs:
         if: steps.compute_tag.outputs.first_rc == 'true'
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          BRANCH: ${{ steps.compute_tag.outputs.version }}
+          VERSION: ${{ steps.compute_tag.outputs.version }}
         with:
           filename: .github/ISSUE_TEMPLATE/release.md
       - uses: s3krit/matrix-message-action@v0.0.2
```
.gitlab-ci.yml

```diff
@@ -227,9 +227,7 @@ generate-impl-guide:
 publish-docker:
   <<: *publish-build
-  image: docker:stable
-  services:
-    - docker:dind
+  image: quay.io/buildah/stable
   <<: *collect-artifacts
   # Don't run on releases - this is handled by the Github Action here:
   # .github/workflows/publish-docker-release.yml
@@ -238,26 +236,29 @@ publish-docker:
     - if: $CI_PIPELINE_SOURCE == "schedule"
     - if: $CI_COMMIT_REF_NAME == "master"
   variables:
-    DOCKER_HOST: tcp://localhost:2375
-    DOCKER_DRIVER: overlay2
     GIT_STRATEGY: none
     # DOCKERFILE: scripts/docker/Dockerfile
-    CONTAINER_IMAGE: parity/polkadot
+    IMAGE_NAME: docker.io/parity/polkadot
   script:
-    - test "$Docker_Hub_User_Parity" -a "$Docker_Hub_Pass_Parity"
-      || ( echo "no docker credentials provided"; exit 1 )
-    - docker login -u "$Docker_Hub_User_Parity" -p "$Docker_Hub_Pass_Parity"
-    - docker info
+    - test "$Docker_Hub_User_Parity" -a "$Docker_Hub_Pass_Parity" ||
+      ( echo "no docker credentials provided"; exit 1 )
     - cd ./artifacts
-    - docker build
+    - buildah bud
       --squash
+      --format=docker
       --build-arg VCS_REF="${CI_COMMIT_SHA}"
       --build-arg BUILD_DATE="$(date -u '+%Y-%m-%dT%H:%M:%SZ')"
-      --tag $CONTAINER_IMAGE:$VERSION
-      --tag $CONTAINER_IMAGE:$EXTRATAG .
-    - docker push $CONTAINER_IMAGE:$VERSION
-    - docker push $CONTAINER_IMAGE:$EXTRATAG
+      --tag "$IMAGE_NAME:$VERSION"
+      --tag "$IMAGE_NAME:$EXTRATAG" .
+    - echo "$Docker_Hub_Pass_Parity" |
+      buildah login --username "$Docker_Hub_User_Parity" --password-stdin docker.io
+    - buildah info
+    - buildah push --format=v2s2 "$IMAGE_NAME:$VERSION"
+    - buildah push --format=v2s2 "$IMAGE_NAME:$EXTRATAG"
   after_script:
-    - docker logout
+    - buildah logout "$IMAGE_NAME"
     # only VERSION information is needed for the deployment
     - find ./artifacts/ -depth -not -name VERSION -not -name artifacts -delete
```
Cargo.lock

(diff collapsed)
Cargo.toml

```diff
@@ -6,7 +6,7 @@ path = "src/main.rs"
 name = "polkadot"
 description = "Implementation of a https://polkadot.network node in Rust based on the Substrate framework."
 license = "GPL-3.0-only"
-version = "0.8.26"
+version = "0.8.27"
 authors = ["Parity Technologies <admin@parity.io>"]
 edition = "2018"
 readme = "README.md"
```
RELEASE.md

```diff
@@ -3,14 +3,14 @@ Polkadot Release Process
 ### Branches

 * release-candidate branch: The branch used for staging of the next release.
   Named like `release-v0.8.26`
 * release branch: The branch to which successful release-candidates are merged
   and tagged with the new version. Named literally `release`.

 ### Notes

 * The release-candidate branch *must* be made in the paritytech/polkadot repo in
   order for release automation to work correctly
 * Any new pushes/merges to the release-candidate branch (for example,
   refs/heads/release-v0.8.26) will result in the rc index being bumped (e.g., v0.8.26-rc1
   to v0.8.26-rc2) and new wasms built.
@@ -32,14 +32,25 @@ automated and require no human action.
    completed
 6. (optional) If a fix is required to the release-candidate:
    1. Merge the fix with `master` first
-   2. Checkout the release-candidate branch and merge `master`
-   3. Revert all changes since the creation of the release-candidate that are
-      **not** required for the fix.
-   4. Push the release-candidate branch to Github - this is now the new release-
+   2. Cherry-pick the commit from `master` to `release-v0.8.26`, fixing any
+      merge conflicts. Try to avoid unnecessarily bumping crates.
+   3. Push the release-candidate branch to Github - this is now the new release-
       candidate
+   4. Depending on the cherry-picked changes, it may be necessary to perform some
+      or all of the manual tests again.
 7. Once happy with the release-candidate, perform the release using the release
    script located at `scripts/release.sh` (or perform the steps in that script
    manually):
    - `./scripts/release.sh v0.8.26`
 8. NOACTION: The HEAD of the `release` branch will be tagged with `v0.8.26`,
-   and a final release will be created on Github.
\ No newline at end of file
+   and a final draft release will be created on Github.
+
+### Security releases
+
+Occasionally there may be changes that need to be made to the most recently
+released version of Polkadot, without taking *every* change to `master` since
+the last release. For example, in the event of a security vulnerability being
+found, where releasing a fixed version is a matter of some expediency. In cases
+like this, the fix should first be merged with master, cherry-picked to a branch
+forked from `release`, tested, and then finally merged with `release`. A
+sensible versioning scheme for changes like this is `vX.Y.Z-1`.
```
cli/Cargo.toml

```diff
 [package]
 name = "polkadot-cli"
-version = "0.8.26"
+version = "0.8.27"
 authors = ["Parity Technologies <admin@parity.io>"]
 description = "Polkadot Relay-chain Client Node"
 edition = "2018"
```
cli/src/cli.rs

```diff
@@ -91,6 +91,13 @@ pub struct RunCmd {
 	/// elapsed (i.e. until a block at height `pause_block + delay` is imported).
 	#[structopt(long = "grandpa-pause", number_of_values(2))]
 	pub grandpa_pause: Vec<u32>,
+
+	/// Add the destination address to the jaeger agent.
+	///
+	/// Must be valid socket address, of format `IP:Port`
+	/// commonly `127.0.0.1:6831`.
+	#[structopt(long)]
+	pub jaeger_agent: Option<std::net::SocketAddr>,
 }

 #[allow(missing_docs)]
@@ -98,7 +105,6 @@ pub struct RunCmd {
 pub struct Cli {
 	#[structopt(subcommand)]
 	pub subcommand: Option<Subcommand>,
 	#[structopt(flatten)]
 	pub run: RunCmd,
 }
```
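Because the new flag is typed as `Option<std::net::SocketAddr>`, structopt parses it with the standard library's `FromStr` implementation, i.e. an `IP:Port` string such as `127.0.0.1:6831`. A small std-only sketch of that parsing behavior (`parse_jaeger_agent` is a hypothetical helper, not in the diff):

```rust
use std::net::SocketAddr;

// Mirrors what structopt does for Option<SocketAddr>: parse via FromStr,
// rejecting anything that is not a valid `IP:Port` pair.
pub fn parse_jaeger_agent(arg: Option<&str>) -> Option<SocketAddr> {
    arg.and_then(|s| s.parse().ok())
}
```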
cli/src/command.rs

```diff
@@ -16,9 +16,32 @@
 use log::info;
 use service::{IdentifyVariant, self};
-use sc_cli::{SubstrateCli, Result, RuntimeVersion, Role};
+use sc_cli::{SubstrateCli, RuntimeVersion, Role};
 use crate::cli::{Cli, Subcommand};

+#[derive(thiserror::Error, Debug)]
+pub enum Error {
+	#[error(transparent)]
+	PolkadotService(#[from] service::Error),
+
+	#[error(transparent)]
+	SubstrateCli(#[from] sc_cli::Error),
+
+	#[error(transparent)]
+	SubstrateService(#[from] sc_service::Error),
+
+	#[error("Other: {0}")]
+	Other(String),
+}
+
+impl std::convert::From<String> for Error {
+	fn from(s: String) -> Self {
+		Self::Other(s)
+	}
+}
+
+type Result<T> = std::result::Result<T, Error>;
+
 fn get_exec_name() -> Option<String> {
 	std::env::current_exe()
 		.ok()
@@ -139,22 +162,29 @@ pub fn run() -> Result<()> {
 				info!("----------------------------");
 			}

-			runner.run_node_until_exit(|config| async move {
+			let jaeger_agent = cli.run.jaeger_agent;
+
+			Ok(runner.run_node_until_exit(move |config| async move {
 				let role = config.role.clone();

-				match role {
-					Role::Light => service::build_light(config).map(|(task_manager, _)| task_manager),
+				let task_manager = match role {
+					Role::Light => service::build_light(config)
+						.map(|(task_manager, _)| task_manager)
+						.map_err(|e| sc_service::Error::Other(e.to_string())),
 					_ => service::build_full(
 						config,
 						service::IsCollator::No,
 						grandpa_pause,
-					).map(|full| full.task_manager),
-				}
-			})
+						jaeger_agent,
+					)
+						.map(|full| full.task_manager)
+						.map_err(|e| sc_service::Error::Other(e.to_string()))
+				};
+				task_manager
+			}).map_err(|e| -> sc_cli::Error { e.into() })?)
 		},
 		Some(Subcommand::BuildSpec(cmd)) => {
 			let runner = cli.create_runner(cmd)?;
-			runner.sync_run(|config| cmd.run(config.chain_spec, config.network))
+			Ok(runner.sync_run(|config| cmd.run(config.chain_spec, config.network))?)
 		},
 		Some(Subcommand::CheckBlock(cmd)) => {
 			let runner = cli.create_runner(cmd)?;
@@ -163,7 +193,8 @@ pub fn run() -> Result<()> {
 			set_default_ss58_version(chain_spec);

 			runner.async_run(|mut config| {
-				let (client, _, import_queue, task_manager) = service::new_chain_ops(&mut config)?;
+				let (client, _, import_queue, task_manager) = service::new_chain_ops(&mut config, None)
+					.map_err(|e| sc_service::Error::Other(e.to_string()))?;
 				Ok((cmd.run(client, import_queue), task_manager))
 			})
 		},
@@ -174,7 +205,8 @@ pub fn run() -> Result<()> {
 			set_default_ss58_version(chain_spec);

 			runner.async_run(|mut config| {
-				let (client, _, _, task_manager) = service::new_chain_ops(&mut config)?;
+				let (client, _, _, task_manager) = service::new_chain_ops(&mut config, None)
+					.map_err(|e| sc_service::Error::Other(e.to_string()))?;
 				Ok((cmd.run(client, config.database), task_manager))
 			})
 		},
@@ -185,7 +217,8 @@ pub fn run() -> Result<()> {
 			set_default_ss58_version(chain_spec);

 			runner.async_run(|mut config| {
-				let (client, _, _, task_manager) = service::new_chain_ops(&mut config)?;
+				let (client, _, _, task_manager) = service::new_chain_ops(&mut config, None)
+					.map_err(|e| sc_service::Error::Other(e.to_string()))?;
 				Ok((cmd.run(client, config.chain_spec), task_manager))
 			})
 		},
@@ -196,13 +229,15 @@ pub fn run() -> Result<()> {
 			set_default_ss58_version(chain_spec);

 			runner.async_run(|mut config| {
-				let (client, _, import_queue, task_manager) = service::new_chain_ops(&mut config)?;
+				let (client, _, import_queue, task_manager) = service::new_chain_ops(&mut config, None)
+					.map_err(|e| sc_service::Error::Other(e.to_string()))?;
 				Ok((cmd.run(client, import_queue), task_manager))
 			})
 		},
 		Some(Subcommand::PurgeChain(cmd)) => {
 			let runner = cli.create_runner(cmd)?;
-			runner.sync_run(|config| cmd.run(config.database))
+			Ok(runner.sync_run(|config| cmd.run(config.database))
+				.map_err(|e| sc_service::Error::Other(e.to_string()))?)
 		},
 		Some(Subcommand::Revert(cmd)) => {
 			let runner = cli.create_runner(cmd)?;
@@ -211,7 +246,8 @@ pub fn run() -> Result<()> {
 			set_default_ss58_version(chain_spec);

 			runner.async_run(|mut config| {
-				let (client, backend, _, task_manager) = service::new_chain_ops(&mut config)?;
+				let (client, backend, _, task_manager) = service::new_chain_ops(&mut config, None)
+					.map_err(|e| sc_service::Error::Other(e.to_string()))?;
 				Ok((cmd.run(client, backend), task_manager))
 			})
 		},
@@ -237,5 +273,6 @@ pub fn run() -> Result<()> {
 			})
 		},
 		Some(Subcommand::Key(cmd)) => cmd.run(),
-	}
+	}?;
+	Ok(())
 }
```
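The new command-level `Error` enum relies on a `From<String>` impl so that `?` can convert plain string errors into `Error::Other`. A std-only sketch of that pattern (the `thiserror` derive and `build_service` step here are illustrative, not the real service API):

```rust
// One variant per failure source; string errors funnel into `Other`.
#[derive(Debug, PartialEq)]
pub enum Error {
    Other(String),
}

impl From<String> for Error {
    fn from(s: String) -> Self {
        Self::Other(s)
    }
}

// Hypothetical fallible step that reports errors as bare Strings.
pub fn build_service(ok: bool) -> Result<u32, String> {
    if ok { Ok(7) } else { Err("service failed".to_string()) }
}

pub fn run_command(ok: bool) -> Result<u32, Error> {
    let value = build_service(ok)?; // String -> Error::Other via From
    Ok(value)
}
```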
erasure-coding/Cargo.toml

```diff
 [package]
 name = "polkadot-erasure-coding"
-version = "0.8.26"
+version = "0.8.27"
 authors = ["Parity Technologies <admin@parity.io>"]
 edition = "2018"
```
node/collation-generation/src/lib.rs

```diff
@@ -202,7 +202,7 @@ async fn handle_new_activations<Context: SubsystemContext>(
 		let availability_cores = availability_cores??;
 		let n_validators = validators??.len();

-		for core in availability_cores {
+		for (core_idx, core) in availability_cores.into_iter().enumerate() {
 			let _availability_core_timer = metrics.time_new_activations_availability_core();

 			let (scheduled_core, assumption) = match core {
@@ -211,12 +211,33 @@ async fn handle_new_activations<Context: SubsystemContext>(
 				}
 				CoreState::Occupied(_occupied_core) => {
 					// TODO: https://github.com/paritytech/polkadot/issues/1573
+					tracing::trace!(
+						target: LOG_TARGET,
+						core_idx = %core_idx,
+						relay_parent = ?relay_parent,
+						"core is occupied. Keep going.",
+					);
 					continue;
 				}
-				_ => continue,
+				CoreState::Free => {
+					tracing::trace!(
+						target: LOG_TARGET,
+						core_idx = %core_idx,
+						"core is free. Keep going.",
+					);
+					continue
+				}
 			};

+			if scheduled_core.para_id != config.para_id {
+				tracing::trace!(
+					target: LOG_TARGET,
+					core_idx = %core_idx,
+					relay_parent = ?relay_parent,
+					our_para = %config.para_id,
+					their_para = %scheduled_core.para_id,
+					"core is not assigned to our para. Keep going.",
+				);
+				continue;
+			}
@@ -233,7 +254,17 @@ async fn handle_new_activations<Context: SubsystemContext>(
 			.await?? {
 				Some(v) => v,
-				None => continue,
+				None => {
+					tracing::trace!(
+						target: LOG_TARGET,
+						core_idx = %core_idx,
+						relay_parent = ?relay_parent,
+						our_para = %config.para_id,
+						their_para = %scheduled_core.para_id,
+						"validation data is not available",
+					);
+					continue
+				}
 			};

 			let task_config = config.clone();
```
node/core/av-store/Cargo.toml

```diff
@@ -26,7 +26,7 @@ sc-service = { git = "https://github.com/paritytech/substrate", branch = "master
 log = "0.4.11"
 env_logger = "0.8.2"
 assert_matches = "1.4.0"
-smallvec = "1.5.0"
+smallvec = "1.5.1"
 kvdb-memorydb = "0.7.0"
 sp-core = { git = "https://github.com/paritytech/substrate", branch = "master" }
```
node/core/av-store/src/lib.rs

```diff
@@ -44,7 +44,6 @@ use polkadot_node_subsystem_util::metrics::{self, prometheus};
 use polkadot_subsystem::messages::{
 	AllMessages, AvailabilityStoreMessage, ChainApiMessage, RuntimeApiMessage, RuntimeApiRequest,
 };
-use thiserror::Error;

 const LOG_TARGET: &str = "availability";
@@ -54,22 +53,32 @@ mod columns {
 	pub const NUM_COLUMNS: u32 = 2;
 }

-#[derive(Debug, Error)]
-enum Error {
+#[derive(Debug, thiserror::Error)]
+#[allow(missing_docs)]
+pub enum Error {
 	#[error(transparent)]
 	RuntimeApi(#[from] RuntimeApiError),
 	#[error(transparent)]
 	ChainApi(#[from] ChainApiError),
 	#[error(transparent)]
 	Erasure(#[from] erasure::Error),
 	#[error(transparent)]
 	Io(#[from] io::Error),
 	#[error(transparent)]
 	Oneshot(#[from] oneshot::Canceled),
 	#[error(transparent)]
 	Subsystem(#[from] SubsystemError),
 	#[error(transparent)]
 	Time(#[from] SystemTimeError),
+	#[error("Custom databases are not supported")]
+	CustomDatabase,
 }

 impl Error {
@@ -418,10 +427,10 @@ pub struct Config {
 }

 impl std::convert::TryFrom<sc_service::config::DatabaseConfig> for Config {
-	type Error = &'static str;
+	type Error = Error;

 	fn try_from(config: sc_service::config::DatabaseConfig) -> Result<Self, Self::Error> {
-		let path = config.path().ok_or("custom databases are not supported")?;
+		let path = config.path().ok_or(Error::CustomDatabase)?;

 		Ok(Self {
 			// substrate cache size is improper here; just use the default
```
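The `TryFrom` change above swaps a `&'static str` error for a typed enum variant via `Option::ok_or`. A minimal std-only model of that idiom (`ConfigError` and `database_path` are hypothetical stand-ins for the real types):

```rust
// A typed error variant replaces the bare string, so callers can match on it.
#[derive(Debug, PartialEq)]
pub enum ConfigError {
    CustomDatabase,
}

// `ok_or` turns the absent path (a custom database) into the typed error.
pub fn database_path(path: Option<&'static str>) -> Result<&'static str, ConfigError> {
    path.ok_or(ConfigError::CustomDatabase)
}
```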
node/core/backing/src/lib.rs

```diff
@@ -37,9 +37,10 @@ use polkadot_node_primitives::{
 	FromTableMisbehavior, Statement, SignedFullStatement, MisbehaviorReport, ValidationResult,
 };
 use polkadot_subsystem::{
+	jaeger,
 	messages::{
 		AllMessages, AvailabilityStoreMessage, CandidateBackingMessage, CandidateSelectionMessage,
-		CandidateValidationMessage, NewBackedCandidate, PoVDistributionMessage, ProvisionableData,
+		CandidateValidationMessage, PoVDistributionMessage, ProvisionableData,
 		ProvisionerMessage, StatementDistributionMessage, ValidationFailed, RuntimeApiRequest,
 	},
 };
@@ -74,11 +75,17 @@ enum Error {
 	#[error("Signature is invalid")]
 	InvalidSignature,
 	#[error("Failed to send candidates {0:?}")]
-	Send(Vec<NewBackedCandidate>),
-	#[error("Oneshot never resolved")]
-	Oneshot(#[from] #[source] oneshot::Canceled),
+	Send(Vec<BackedCandidate>),
+	#[error("FetchPoV channel closed before receipt")]
+	FetchPoV(#[source] oneshot::Canceled),
+	#[error("ValidateFromChainState channel closed before receipt")]
+	ValidateFromChainState(#[source] oneshot::Canceled),
+	#[error("StoreAvailableData channel closed before receipt")]
+	StoreAvailableData(#[source] oneshot::Canceled),
+	#[error("a channel was closed before receipt in try_join!")]
+	JoinMultiple(#[source] oneshot::Canceled),
 	#[error("Obtaining erasure chunks failed")]
-	ObtainErasureChunks(#[from] #[source] erasure_coding::Error),
+	ObtainErasureChunks(#[from] erasure_coding::Error),
 	#[error(transparent)]
 	ValidationFailed(#[from] ValidationFailed),
 	#[error(transparent)]
@@ -124,7 +131,7 @@ struct CandidateBackingJob {
 	/// Outbound message channel sending part.
 	tx_from: mpsc::Sender<FromJobCommand>,
 	/// The `ParaId` assigned to this validator
-	assignment: ParaId,
+	assignment: Option<ParaId>,
 	/// The collator required to author the candidate, if any.
 	required_collator: Option<CollatorId>,
 	/// We issued `Seconded`, `Valid` or `Invalid` statements on about these candidates.
@@ -270,7 +277,7 @@ async fn store_available_data(
 		).into()
 	).await?;

-	let _ = rx.await?;
+	let _ = rx.await.map_err(Error::StoreAvailableData)?;

 	Ok(())
 }
@@ -328,7 +335,7 @@ async fn request_pov_from_distribution(
 		PoVDistributionMessage::FetchPoV(parent, descriptor, tx)
 	).into()).await?;

-	Ok(rx.await?)
+	rx.await.map_err(Error::FetchPoV)
 }

 async fn request_candidate_validation(
@@ -347,7 +354,11 @@ async fn request_candidate_validation(
 		).into()
 	).await?;

-	Ok(rx.await??)
+	match rx.await {
+		Ok(Ok(validation_result)) => Ok(validation_result),
+		Ok(Err(err)) => Err(Error::ValidationFailed(err)),
+		Err(err) => Err(Error::ValidateFromChainState(err)),
+	}
 }

 type BackgroundValidationResult = Result<(CandidateReceipt, CandidateCommitments, Arc<PoV>), CandidateReceipt>;
@@ -394,6 +405,12 @@ async fn validate_and_make_available(
 		ValidationResult::Valid(commitments, validation_data) => {
 			// If validation produces a new set of commitments, we vote the candidate as invalid.
 			if commitments.hash() != expected_commitments_hash {
+				tracing::trace!(
+					target: LOG_TARGET,
+					candidate_receipt = ?candidate,
+					actual_commitments = ?commitments,
+					"Commitments obtained with validation don't match the announced by the candidate receipt",
+				);
 				Err(candidate)
 			} else {
 				let erasure_valid = make_pov_available(
@@ -408,11 +425,25 @@ async fn validate_and_make_available(
 				match erasure_valid {
 					Ok(()) => Ok((candidate, commitments, pov.clone())),
-					Err(InvalidErasureRoot) => Err(candidate),
+					Err(InvalidErasureRoot) => {
+						tracing::trace!(
+							target: LOG_TARGET,
+							candidate_receipt = ?candidate,
+							actual_commitments = ?commitments,
+							"Erasure root doesn't match the announced by the candidate receipt",
+						);
+						Err(candidate)
+					},
 				}
```
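The `request_candidate_validation` change replaces `Ok(rx.await??)` with an explicit match so each layer of the nested `Result` maps to its own error variant. A std-only model of that flattening (the concrete types here are simplified stand-ins for the oneshot channel and validation error):

```rust
// Distinguish "the channel was closed" from "validation itself failed".
#[derive(Debug, PartialEq)]
pub enum Error {
    ValidationFailed(String),
    ValidateFromChainState, // channel closed before receipt
}

pub fn flatten(rx: Result<Result<u32, String>, ()>) -> Result<u32, Error> {
    match rx {
        Ok(Ok(validation_result)) => Ok(validation_result),
        Ok(Err(err)) => Err(Error::ValidationFailed(err)),
        Err(()) => Err(Error::ValidateFromChainState),
    }
}
```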