title: "`AllowHrmpNotificationsFromRelayChain` barrier for HRMP notifications from the relaychain"
doc:
- audience: Runtime Dev
description: |
A new barrier, `AllowHrmpNotificationsFromRelayChain`, has been added.
It can be used to ensure that HRMP notifications originate solely from the Relay Chain.
If your runtime relies on these notifications,
include it in the runtime's barrier type for `xcm_executor::Config`.
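For illustration, a minimal sketch of wiring it in (everything here except `AllowHrmpNotificationsFromRelayChain` is a placeholder for what your runtime already defines):

```rust
// Sketch only: `XcmConfig` and the other barrier members are placeholders
// for your runtime's existing XCM configuration.
use xcm_builder::{AllowHrmpNotificationsFromRelayChain, TakeWeightCredit};

pub type Barrier = (
    TakeWeightCredit,
    // Accept HRMP channel notifications, but only from the Relay Chain:
    AllowHrmpNotificationsFromRelayChain,
    // ...your existing barriers...
);

impl xcm_executor::Config for XcmConfig {
    type Barrier = Barrier;
    // ...remaining associated types unchanged...
}
```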
crates:
- name: staging-xcm-builder
bump: minor
# Schema: Polkadot SDK PRDoc Schema (prdoc) v1.0.0
# See doc at https://raw.githubusercontent.com/paritytech/polkadot-sdk/master/prdoc/schema_user.json
title: "Bridge: added free headers submission support to the substrate-relay"
doc:
- audience: Node Dev
description: |
The bridge finality and parachains relayers now support a mode in which they submit some
headers for free. A runtime configuration setting introduces this "free header"
concept. Submitting such a header is considered a common good, so it is free for relayers.
crates:
- name: bp-bridge-hub-kusama
bump: major
- name: bp-bridge-hub-polkadot
bump: major
- name: bp-bridge-hub-rococo
bump: major
- name: bp-bridge-hub-westend
bump: major
- name: relay-substrate-client
bump: major
- name: finality-relay
bump: major
- name: substrate-relay-helper
bump: major
- name: parachains-relay
bump: major
# Schema: Polkadot SDK PRDoc Schema (prdoc) v1.0.0
# See doc at https://raw.githubusercontent.com/paritytech/polkadot-sdk/master/prdoc/schema_user.json
title: "Snowbridge: deposit extra fee to beneficiary on Asset Hub"
doc:
- audience: Runtime Dev
description: |
Snowbridge transfers arriving on Asset Hub will deposit both the asset and the fees to the beneficiary, so the fees do not get trapped.
As an added benefit, if the leftover fees exceed the existential deposit (ED), they can be used to create the beneficiary account in case it does not yet exist on Asset Hub.
crates:
- name: snowbridge-router-primitives
title: "wasm-builder: Make it easier to build a WASM binary"
doc:
- audience: [Runtime Dev, Node Dev]
description: |
Combines all the recommended calls of the `WasmBuilder` into
`build_using_defaults()`, or `init_with_defaults()` if more changes are required.
Otherwise the interface doesn't change, and users can continue to use
the "old" interface.
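For example, a runtime's `build.rs` can now shrink to a single call (a sketch; assumes `substrate-wasm-builder` is already a build dependency):

```rust
// build.rs — sketch of the new one-call interface.
fn main() {
    // Replaces the previously recommended chain of `WasmBuilder` calls;
    // use `init_with_defaults()` instead if further customization is needed.
    substrate_wasm_builder::WasmBuilder::build_using_defaults();
}
```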
crates:
- name: substrate-wasm-builder
title: "polkadot_runtime_parachains::coretime: Expose `MaxXcmTransactWeight`"
doc:
- audience: Runtime Dev
description: |
Expose `MaxXcmTransactWeight` via the `Config` trait. This allows runtime
implementors to set the maximum weight required for the calls on the
coretime chain. It basically needs to be set to
`max_weight(set_leases, reserve, notify_core_count)`, where `set_leases`
etc. are the calls on the coretime chain. This ensures that the XCM
`Transact` calls sent by the relay chain coretime pallet to the coretime
chain can be dispatched.
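As a sketch, a relay chain runtime might set it like this (the weight values are placeholders; derive the real bound from the measured weights of `set_leases`, `reserve`, and `notify_core_count` on the coretime chain):

```rust
parameter_types! {
    // Placeholder value; must cover
    // max_weight(set_leases, reserve, notify_core_count) on the coretime chain.
    pub MaxXcmTransactWeight: Weight = Weight::from_parts(250_000_000, 10_000);
}

impl coretime::Config for Runtime {
    // ...existing associated types...
    type MaxXcmTransactWeight = MaxXcmTransactWeight;
}
```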
crates:
- name: polkadot-runtime-parachains
bump: major
title: "Remove XCM SafeCallFilter for chains using Weights::v3"
doc:
- audience: Runtime User
description: |
`SafeCallFilter` was removed from Rococo and Westend relay and system chains as they
all now use Weights::v3 which already accounts for call PoV size.
This effectively removes artificial limitations on what users can `XCM::Transact` on
these chains (blockspace limitations are still upheld).
crates:
- name: asset-hub-rococo-runtime
bump: minor
- name: asset-hub-westend-runtime
bump: minor
- name: bridge-hub-rococo-runtime
bump: minor
- name: bridge-hub-westend-runtime
bump: minor
- name: collectives-westend-runtime
bump: minor
- name: coretime-rococo-runtime
bump: minor
- name: coretime-westend-runtime
bump: minor
- name: people-rococo-runtime
bump: minor
- name: people-westend-runtime
bump: minor
title: "Treat XCM ExceedsStackLimit errors as transient in the MQ pallet"
doc:
- audience: Runtime User
description: |
Fixes an issue where the `MessageQueue` pallet could incorrectly assume that a message will permanently fail to process, and disallow it from being retried.
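Conceptually, the change classifies stack-limit errors as transient rather than permanent, along these lines (an illustrative standalone sketch, not the pallet's actual types):

```rust
// Illustrative sketch: a transient error leaves the message eligible for retry.
#[derive(Debug)]
enum ProcessError {
    ExceedsStackLimit, // depends on execution context, so a retry may succeed
    Corrupt,           // can never succeed, so retrying is pointless
}

fn is_permanent(e: &ProcessError) -> bool {
    match e {
        ProcessError::ExceedsStackLimit => false, // transient: keep the message queued
        ProcessError::Corrupt => true,            // permanent: give up on the message
    }
}

fn main() {
    assert!(!is_permanent(&ProcessError::ExceedsStackLimit));
    assert!(is_permanent(&ProcessError::Corrupt));
    println!("ExceedsStackLimit is transient");
}
```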
crates:
- name: frame-support
bump: major
- name: pallet-message-queue
bump: patch
- name: staging-xcm-builder
bump: patch
- name: staging-xcm-executor
bump: patch
# Schema: Polkadot SDK PRDoc Schema (prdoc) v1.0.0
# See doc at https://raw.githubusercontent.com/paritytech/polkadot-sdk/master/prdoc/schema_user.json
title: Fixed GrandpaConsensusLogReader::find_scheduled_change
doc:
- audience: Runtime Dev
description: |
This PR fixes an issue with the search for the authority-set-change digest item
in the bridges code. The issue occurs when there are multiple consensus
digest items in the same header digest.
crates:
- name: bp-header-chain
title: "Re-prepare PVF artifacts only if needed"
doc:
- audience: Node Dev
description: |
When a change in the executor environment parameters cannot affect the prepared artifact,
the artifact is preserved without recompilation and reused for future executions. This mitigates
situations where every unrelated executor parameter change resulted in re-preparing every
artifact on every validator, causing a significant finality lag.
crates:
- name: polkadot-node-core-pvf
bump: minor
- name: polkadot-primitives
bump: minor
title: "[pallet-contracts] stabilize xcm_send and xcm_execute"
doc:
- audience: Runtime Dev
description: |
`xcm_send` and `xcm_execute` are currently marked as unstable. This PR stabilizes them.
crates:
- name: pallet-contracts
bump: major
title: "pallet_broker::start_sales: Take `extra_cores` and not total cores"
doc:
- audience: Runtime User
description: |
Change `pallet_broker::start_sales` to take `extra_cores` rather than the total number of cores.
It will calculate the total number of cores to offer based on the number of
reservations plus the number of leases plus `extra_cores`. Internally it will
also notify the relay chain of the required number of cores.
Thus, starting the first sales with `pallet-broker` requires less brain power ;)
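The new accounting amounts to the following (a standalone sketch; `pallet-broker` computes this internally):

```rust
// Standalone sketch of the core-count arithmetic done by `start_sales`:
// the caller now passes only `extra_cores`, and the pallet derives the total.
fn total_cores(reservations: u32, leases: u32, extra_cores: u32) -> u32 {
    reservations + leases + extra_cores
}

fn main() {
    // e.g. 1 reservation, 2 leases, and 3 cores offered for sale:
    assert_eq!(total_cores(1, 2, 3), 6);
    println!("total cores: {}", total_cores(1, 2, 3));
}
```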
crates:
- name: pallet-broker
bump: minor
title: "Fix Stuck Collator Funds"
doc:
- audience: Runtime Dev
description: |
Fixes stuck collator funds by providing a migration that should have been in PR 1340.
crates:
- name: pallet-collator-selection
bump: patch
title: "Add logic to increase pvf worker based on chain"
doc:
- audience: Node Operator
description: |
New logic and CLI parameters were added to allow increasing the number of PVF
workers based on the chain ID.
crates:
- name: polkadot-node-core-candidate-validation
bump: minor
- name: polkadot-cli
bump: minor
- name: polkadot-service
bump: minor
......@@ -654,7 +654,6 @@ parameter_types! {
pub const SlashDeferDuration: sp_staking::EraIndex = 24 * 7; // 1/4 the bonding duration.
pub const RewardCurve: &'static PiecewiseLinear<'static> = &REWARD_CURVE;
pub const MaxNominators: u32 = 64;
pub const OffendingValidatorsThreshold: Perbill = Perbill::from_percent(17);
pub const MaxControllersInDeprecationBatch: u32 = 5900;
pub OffchainRepeat: BlockNumber = 5;
pub HistoryDepth: u32 = 84;
......@@ -690,7 +689,6 @@ impl pallet_staking::Config for Runtime {
type EraPayout = pallet_staking::ConvertCurve<RewardCurve>;
type NextNewSession = Session;
type MaxExposurePageSize = ConstU32<256>;
type OffendingValidatorsThreshold = OffendingValidatorsThreshold;
type ElectionProvider = ElectionProviderMultiPhase;
type GenesisElectionProvider = onchain::OnChainExecution<OnChainSeqPhragmen>;
type VoterList = VoterList;
......@@ -703,6 +701,7 @@ impl pallet_staking::Config for Runtime {
type EventListeners = NominationPools;
type WeightInfo = pallet_staking::weights::SubstrateWeight<Runtime>;
type BenchmarkingConfig = StakingBenchmarkingConfig;
type DisablingStrategy = pallet_staking::UpToLimitDisablingStrategy;
}
impl pallet_fast_unstake::Config for Runtime {
......
......@@ -16,12 +16,11 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
use crate::keystore::BeefyKeystore;
use codec::{DecodeAll, Encode};
use codec::DecodeAll;
use sp_consensus::Error as ConsensusError;
use sp_consensus_beefy::{
ecdsa_crypto::{AuthorityId, Signature},
ValidatorSet, ValidatorSetId, VersionedFinalityProof,
BeefySignatureHasher, KnownSignature, ValidatorSet, ValidatorSetId, VersionedFinalityProof,
};
use sp_runtime::traits::{Block as BlockT, NumberFor};
......@@ -45,46 +44,31 @@ pub(crate) fn decode_and_verify_finality_proof<Block: BlockT>(
) -> Result<BeefyVersionedFinalityProof<Block>, (ConsensusError, u32)> {
let proof = <BeefyVersionedFinalityProof<Block>>::decode_all(&mut &*encoded)
.map_err(|_| (ConsensusError::InvalidJustification, 0))?;
verify_with_validator_set::<Block>(target_number, validator_set, &proof).map(|_| proof)
verify_with_validator_set::<Block>(target_number, validator_set, &proof)?;
Ok(proof)
}
/// Verify the Beefy finality proof against the validator set at the block it was generated.
pub(crate) fn verify_with_validator_set<Block: BlockT>(
pub(crate) fn verify_with_validator_set<'a, Block: BlockT>(
target_number: NumberFor<Block>,
validator_set: &ValidatorSet<AuthorityId>,
proof: &BeefyVersionedFinalityProof<Block>,
) -> Result<(), (ConsensusError, u32)> {
let mut signatures_checked = 0u32;
validator_set: &'a ValidatorSet<AuthorityId>,
proof: &'a BeefyVersionedFinalityProof<Block>,
) -> Result<Vec<KnownSignature<&'a AuthorityId, &'a Signature>>, (ConsensusError, u32)> {
match proof {
VersionedFinalityProof::V1(signed_commitment) => {
if signed_commitment.signatures.len() != validator_set.len() ||
signed_commitment.commitment.validator_set_id != validator_set.id() ||
signed_commitment.commitment.block_number != target_number
{
return Err((ConsensusError::InvalidJustification, 0))
}
// Arrangement of signatures in the commitment should be in the same order
// as validators for that set.
let message = signed_commitment.commitment.encode();
let valid_signatures = validator_set
.validators()
.into_iter()
.zip(signed_commitment.signatures.iter())
.filter(|(id, signature)| {
signature
.as_ref()
.map(|sig| {
signatures_checked += 1;
BeefyKeystore::verify(*id, sig, &message[..])
})
.unwrap_or(false)
})
.count();
if valid_signatures >= crate::round::threshold(validator_set.len()) {
Ok(())
let signatories = signed_commitment
.verify_signatures::<_, BeefySignatureHasher>(target_number, validator_set)
.map_err(|checked_signatures| {
(ConsensusError::InvalidJustification, checked_signatures)
})?;
if signatories.len() >= crate::round::threshold(validator_set.len()) {
Ok(signatories)
} else {
Err((ConsensusError::InvalidJustification, signatures_checked))
Err((
ConsensusError::InvalidJustification,
signed_commitment.signature_count() as u32,
))
}
},
}
......@@ -92,6 +76,7 @@ pub(crate) fn verify_with_validator_set<Block: BlockT>(
#[cfg(test)]
pub(crate) mod tests {
use codec::Encode;
use sp_consensus_beefy::{
known_payloads, test_utils::Keyring, Commitment, Payload, SignedCommitment,
VersionedFinalityProof,
......
......@@ -86,7 +86,7 @@ where
BasicQueue::new(ManualSealVerifier, block_import, None, spawner, registry)
}
/// Params required to start the instant sealing authorship task.
/// Params required to start the manual sealing authorship task.
pub struct ManualSealParams<B: BlockT, BI, E, C: ProvideRuntimeApi<B>, TP, SC, CS, CIDP, P> {
/// Block import instance.
pub block_import: BI,
......@@ -114,7 +114,7 @@ pub struct ManualSealParams<B: BlockT, BI, E, C: ProvideRuntimeApi<B>, TP, SC, C
pub create_inherent_data_providers: CIDP,
}
/// Params required to start the manual sealing authorship task.
/// Params required to start the instant sealing authorship task.
pub struct InstantSealParams<B: BlockT, BI, E, C: ProvideRuntimeApi<B>, TP, SC, CIDP, P> {
/// Block import instance for, well, importing blocks.
pub block_import: BI,
......
......@@ -144,7 +144,6 @@ parameter_types! {
pub const BondingDuration: EraIndex = 3;
pub const SlashDeferDuration: EraIndex = 0;
pub const RewardCurve: &'static PiecewiseLinear<'static> = &REWARD_CURVE;
pub const OffendingValidatorsThreshold: Perbill = Perbill::from_percent(16);
pub static ElectionsBounds: ElectionBounds = ElectionBoundsBuilder::default().build();
}
......@@ -174,7 +173,6 @@ impl pallet_staking::Config for Test {
type UnixTime = pallet_timestamp::Pallet<Test>;
type EraPayout = pallet_staking::ConvertCurve<RewardCurve>;
type MaxExposurePageSize = ConstU32<64>;
type OffendingValidatorsThreshold = OffendingValidatorsThreshold;
type NextNewSession = Session;
type ElectionProvider = onchain::OnChainExecution<OnChainSeqPhragmen>;
type GenesisElectionProvider = Self::ElectionProvider;
......@@ -187,6 +185,7 @@ impl pallet_staking::Config for Test {
type EventListeners = ();
type BenchmarkingConfig = pallet_staking::TestBenchmarkingConfig;
type WeightInfo = ();
type DisablingStrategy = pallet_staking::UpToLimitDisablingStrategy;
}
impl pallet_offences::Config for Test {
......
......@@ -28,6 +28,7 @@ docify = "0.2.8"
[dev-dependencies]
pallet-transaction-payment = { path = "../transaction-payment" }
frame-support = { path = "../support", features = ["experimental"] }
sp-core = { path = "../../primitives/core" }
sp-io = { path = "../../primitives/io" }
paste = "1.0.12"
......
......@@ -954,6 +954,13 @@ pub mod pallet {
if !did_consume && does_consume {
frame_system::Pallet::<T>::inc_consumers(who)?;
}
if does_consume && frame_system::Pallet::<T>::consumers(who) == 0 {
// NOTE: This is a failsafe and should not happen for normal accounts. A normal
// account should have gotten a consumer ref in `!did_consume && does_consume`
// at some point.
log::error!(target: LOG_TARGET, "Defensively bumping a consumer ref.");
frame_system::Pallet::<T>::inc_consumers(who)?;
}
if did_provide && !does_provide {
// This could reap the account so must go last.
frame_system::Pallet::<T>::dec_providers(who).map_err(|r| {
......
// This file is part of Substrate.
// Copyright (C) Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![cfg(test)]
use crate::{
system::AccountInfo,
tests::{ensure_ti_valid, Balances, ExtBuilder, System, Test, TestId, UseSystem},
AccountData, ExtraFlags, TotalIssuance,
};
use frame_support::{
assert_noop, assert_ok, hypothetically,
traits::{
fungible::{Mutate, MutateHold},
tokens::Precision,
},
};
use sp_runtime::DispatchError;
/// There are some accounts that have one consumer ref too few. These accounts are at risk of losing
/// their held (reserved) balance. They would not just lose it; it would also no longer be accounted
/// for in the Total Issuance. Here we test that in such a case the account is not reaped, but
/// instead gets one consumer ref back for its reserved balance.
#[test]
fn regression_historic_acc_does_not_evaporate_reserve() {
ExtBuilder::default().build_and_execute_with(|| {
UseSystem::set(true);
let (alice, bob) = (0, 1);
// Alice is in a bad state with consumer == 0 && reserved > 0:
Balances::set_balance(&alice, 100);
TotalIssuance::<Test>::put(100);
ensure_ti_valid();
assert_ok!(Balances::hold(&TestId::Foo, &alice, 10));
// This is the issue of the account:
System::dec_consumers(&alice);
assert_eq!(
System::account(&alice),
AccountInfo {
data: AccountData {
free: 90,
reserved: 10,
frozen: 0,
flags: ExtraFlags(1u128 << 127),
},
nonce: 0,
consumers: 0, // should be 1 on a good acc
providers: 1,
sufficients: 0,
}
);
ensure_ti_valid();
// Reaping the account is prevented by the new logic:
assert_noop!(
Balances::transfer_allow_death(Some(alice).into(), bob, 90),
DispatchError::ConsumerRemaining
);
assert_noop!(
Balances::transfer_all(Some(alice).into(), bob, false),
DispatchError::ConsumerRemaining
);
// normal transfers still work:
hypothetically!({
assert_ok!(Balances::transfer_keep_alive(Some(alice).into(), bob, 40));
// Alice got back her consumer ref:
assert_eq!(System::consumers(&alice), 1);
ensure_ti_valid();
});
hypothetically!({
assert_ok!(Balances::transfer_all(Some(alice).into(), bob, true));
// Alice got back her consumer ref:
assert_eq!(System::consumers(&alice), 1);
ensure_ti_valid();
});
// un-reserving all does not add a consumer ref:
hypothetically!({
assert_ok!(Balances::release(&TestId::Foo, &alice, 10, Precision::Exact));
assert_eq!(System::consumers(&alice), 0);
assert_ok!(Balances::transfer_keep_alive(Some(alice).into(), bob, 40));
assert_eq!(System::consumers(&alice), 0);
ensure_ti_valid();
});
// un-reserving some does add a consumer ref:
hypothetically!({
assert_ok!(Balances::release(&TestId::Foo, &alice, 5, Precision::Exact));
assert_eq!(System::consumers(&alice), 1);
assert_ok!(Balances::transfer_keep_alive(Some(alice).into(), bob, 40));
assert_eq!(System::consumers(&alice), 1);
ensure_ti_valid();
});
});
}