- Jul 08, 2024
-
-
Egor_P authored
This PR backports regular version bumps and prdocs reordering from the 1.14.0 release branch to master
-
- Jul 07, 2024
-
-
Muharem Ismailov authored
Functions `can_decrease` and `can_increase` do not return successful consequence results for assets undergoing destruction; instead, they return the `UnknownAsset` consequence variant. This update aligns their behavior with similar functions, such as `reducible_balance`, `increase_balance`, `decrease_balance`, and `burn`, which return an `AssetNotLive` error for assets in the process of being destroyed.
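As a hedged illustration (not code from this PR), calling code that branches on a `frame_support::traits::tokens::DepositConsequence` would now see an asset undergoing destruction reported as unknown; the `handle` helper below is purely hypothetical:

```rust
use frame_support::traits::tokens::DepositConsequence;

// Hypothetical helper: react to the consequence returned by a
// `can_increase`-style check for some asset/account/amount.
fn handle(consequence: DepositConsequence) {
    match consequence {
        DepositConsequence::Success => {
            // The deposit can proceed.
        }
        DepositConsequence::UnknownAsset => {
            // With this change, an asset that is being destroyed lands here
            // instead of reporting success, mirroring the `AssetNotLive`
            // error returned by `increase_balance`, `decrease_balance` and `burn`.
        }
        _ => {
            // Other outcomes such as Overflow or BelowMinimum.
        }
    }
}
```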
-
- Jul 06, 2024
-
-
Deepak Chaudhary authored
### ISSUE
Link to the issue: https://github.com/paritytech/polkadot-sdk/issues/3326 cc @muraca

Deliverables
- [Deprecation] remove pallet::getter usage from all pallet-babe (a caller-side sketch of the replacement syntax follows after this entry)

### Test Outcomes
Successful tests by running `cargo test -p pallet-babe --features runtime-benchmarks`. The run also repeatedly logs a `runtime::timestamp` error (`pallet_timestamp::UnixTime::now` is called at genesis, invalid value returned: 0), which does not affect the test results:

```
running 32 tests
test mock::__pallet_staking_reward_curve_test_module::reward_curve_piece_count ... ok
test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
test mock::test_genesis_config_builds ... ok
2024-06-28T17:02:11.158812Z ERROR runtime::storage: Corrupted state at `0x1cb6f36e027abb2091cfb5110ab5087f9aab0a5b63b359512deee557c9f4cf63`: Error { cause: Some(Error { cause: None, desc: "Could not decode `NextConfigDescriptor`, variant doesn't exist" }), desc: "Could not decode `Option::Some(T)`" }
2024-06-28T17:02:11.159752Z ERROR runtime::timestamp: `pallet_timestamp::UnixTime::now` is called at genesis, invalid value returned: 0
test tests::add_epoch_configurations_migration_works ... ok
test tests::author_vrf_output_for_secondary_vrf ... ok
test benchmarking::bench_check_equivocation_proof ... ok
test tests::can_estimate_current_epoch_progress ... ok
test tests::author_vrf_output_for_primary ... ok
test tests::authority_index ... ok
test tests::empty_randomness_is_correct ... ok
test tests::check_module ... ok
test tests::current_slot_is_processed_on_initialization ... ok
test tests::can_enact_next_config ... ok
test tests::can_predict_next_epoch_change ... ok
test tests::first_block_epoch_zero_start ... ok
test tests::initial_values ... ok
test tests::only_root_can_enact_config_change ... ok
test tests::no_author_vrf_output_for_secondary_plain ... ok
test tests::can_fetch_current_and_next_epoch_data ... ok
test tests::report_equivocation_has_valid_weight ... ok
test tests::report_equivocation_after_skipped_epochs_works ... ok
test tests::generate_equivocation_report_blob ... ok
test tests::disabled_validators_cannot_author_blocks - should panic ... ok
test tests::skipping_over_epochs_works ... ok
test tests::tracks_block_numbers_when_current_and_previous_epoch_started ... ok
test tests::report_equivocation_current_session_works ... ok
test tests::report_equivocation_invalid_key_owner_proof ... ok
test tests::report_equivocation_validate_unsigned_prevents_duplicates ... ok
test tests::report_equivocation_invalid_equivocation_proof ... ok
test tests::valid_equivocation_reports_dont_pay_fees ... ok
test tests::report_equivocation_old_session_works ... ok
test mock::__pallet_staking_reward_curve_test_module::reward_curve_precision ... ok

test result: ok. 32 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.20s

Doc-tests pallet-babe
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Polkadot Address: 16htXkeVhfroBhL6nuqiwknfXKcT6WadJPZqEi2jRf9z4XPY
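As a hedged sketch of what the deprecation means for downstream code (assuming a `Runtime` type implementing `pallet_babe::Config`; the exact getters removed by this PR are not listed here):

```rust
// Sketch only: reading BABE authorities after the `#[pallet::getter]`
// attributes are gone.
fn current_authority_count<Runtime: pallet_babe::Config>() -> usize {
    // Before: `pallet_babe::Pallet::<Runtime>::authorities()`, a getter that
    // used to be generated by `#[pallet::getter(fn authorities)]`.
    // After: read the storage item directly with the turbofish syntax.
    pallet_babe::Authorities::<Runtime>::get().len()
}
```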
-
Deepak Chaudhary authored
### ISSUE
Link to the issue: https://github.com/paritytech/polkadot-sdk/issues/3326 cc @muraca

Deliverables
- [Deprecation] remove pallet::getter usage from pallet-transaction-storage

### Test Outcomes
`cargo test -p pallet-transaction-storage --features runtime-benchmarks`:

```
running 9 tests
test mock::test_genesis_config_builds ... ok
test tests::burns_fee ... ok
test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
test tests::discards_data ... ok
test tests::renews_data ... ok
test benchmarking::bench_renew ... ok
test benchmarking::bench_store ... ok
test tests::checks_proof ... ok
test benchmarking::bench_check_proof_max has been running for over 60 seconds
test benchmarking::bench_check_proof_max ... ok

test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 72.57s

Doc-tests pallet-transaction-storage
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Polkadot Address: 16htXkeVhfroBhL6nuqiwknfXKcT6WadJPZqEi2jRf9z4XPY
-
Tomás Senovilla Polo authored
Hi! In the course of a talk with @shawntabrizi in Singapore, we realized the documentation related to freezing balances is a little bit confusing. It stated that a frozen amount is released at some specified block number, which isn't true in general. This PR fixes that and further specifies that the frozen balance may exceed the available balance, according to what we learned at the PBA. This behaviour was not specified in the documentation AFAIK. This is the first time I submit something to the Polkadot SDK repo, so please feel free to rephrase the docs I added in case I messed up! --------- Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com> Co-authored-by: command-bot <>
-
Sam Johnson authored
Release notes here: https://github.com/sam0x17/macro_magic/releases/tag/v0.5.1 Some performance improvements, plus an upgrade to `derive-syn-parse` 2.0, which means polkadot-sdk now fully upgrades that crate within the workspace.
-
- Jul 05, 2024
-
-
Nazar Mokrynskyi authored
This PR largely fixes https://github.com/paritytech/polkadot-sdk/issues/4903 by addressing it from a few different directions. The high-level observation is that the complexity of finalization was unfortunately roughly `O(n^3)`. Not only was `displaced_leaves_after_finalizing` extremely inefficient on its own, especially when large ranges of blocks were involved, it was also called once upfront and then again on every single block being finalized, over and over. The first commit refactors code adjacent to `displaced_leaves_after_finalizing` to optimize memory allocations. For example, things like `BTreeMap<_, Vec<_>>` were very bad in terms of the number of allocations and, after analyzing the code paths, turned out to be completely unnecessary, so they were replaced with `Vec<(_, _)>` (see the sketch after this entry). In other places allocations of known size were not done upfront and some APIs required unnecessary cloning of vectors. I checked invariants and didn't find anything that was violated after refactoring. The second commit completely replaces the `displaced_leaves_after_finalizing` implementation with a much more efficient one. In my case with ~82k blocks and ~13k leaves it now takes ~5.4s to finish `client.apply_finality()`. The idea is to avoid querying the same blocks over and over again, by introducing a temporary local cache of the blocks related to leaves above the block being finalized, as well as a local cache of the finalized branch of the chain. I left some comments in the code and wrote tests that I believe check all code invariants for correctness. `lowest_common_ancestor_multiblock` was removed as an unnecessary API that was not great in terms of performance; domain-specific code should be written instead, as done in `displaced_leaves_after_finalizing`. After these changes I noticed finalization was still horribly slow; it turned out that even though `displaced_leaves_after_finalizing` was way faster than before (probably an order of magnitude), it was still called for every single one of those 82k blocks.
The quick hack I came up with in the third commit to handle this edge case was to not call it when finalizing multiple blocks at once until the very last moment. It works and allows the whole finalization to finish in just 14 seconds (5.4+5.4 of which are the two calls to `displaced_leaves_after_finalizing`). I'm really not happy with the fact that `displaced_leaves_after_finalizing` is called twice, but much heavier refactoring would be necessary to get rid of the second call.

Next steps:
* assuming the changes are acceptable, I'll write a prdoc
* https://github.com/paritytech/polkadot-sdk/pull/4920 or something similar in spirit should be implemented to unleash efficient parallelism with rayon in `displaced_leaves_after_finalizing`, which will allow its performance to scale further (and significantly!) rather than being CPU-bound on a single core; also, reading the database sequentially should ideally be avoided
* someone should look into removing the second `displaced_leaves_after_finalizing` call
* further cleanups are possible if `undo_finalization` can be removed

Polkadot Address: 1vSxzbyz2cJREAuVWjhXUT1ds8vBzoxn2w4asNpusQKwjJd --------- Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
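A hedged sketch of the allocation pattern mentioned above, with illustrative types rather than the actual blockchain-db code: grouping per key in a `BTreeMap<_, Vec<_>>` pays one allocation per key, while a flat `Vec<(_, _)>` with pre-reserved capacity does not.

```rust
use std::collections::BTreeMap;

type BlockNumber = u64;
type Hash = [u8; 32];

// One `Vec` allocation per distinct block number, plus the map nodes themselves.
fn group_with_map(displaced: &[(BlockNumber, Hash)]) -> BTreeMap<BlockNumber, Vec<Hash>> {
    let mut out: BTreeMap<BlockNumber, Vec<Hash>> = BTreeMap::new();
    for (number, hash) in displaced {
        out.entry(*number).or_default().push(*hash);
    }
    out
}

// If the pairs are only iterated afterwards, a flat vector allocated upfront
// with a known capacity avoids the per-key allocations entirely.
fn group_flat(displaced: &[(BlockNumber, Hash)]) -> Vec<(BlockNumber, Hash)> {
    let mut out = Vec::with_capacity(displaced.len());
    out.extend_from_slice(displaced);
    out
}
```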
-
Deepak Chaudhary authored
### ISSUE
Link to the issue: https://github.com/paritytech/polkadot-sdk/issues/3326 cc @muraca

Deliverables
- [Deprecation] remove pallet::getter usage from all pallet-vesting

### Test Outcomes
Successful tests by running `cargo test -p pallet-vesting --features runtime-benchmarks`:

```
running 45 tests
test benchmarking::bench_force_vested_transfer ... ok
test benchmarking::bench_vest_other_locked ... ok
test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
test benchmarking::bench_not_unlocking_merge_schedules ... ok
test benchmarking::bench_unlocking_merge_schedules ... ok
test mock::test_genesis_config_builds ... ok
test tests::build_genesis_has_storage_version_v1 ... ok
test tests::check_vesting_status ... ok
test benchmarking::bench_force_remove_vesting_schedule ... ok
test tests::check_vesting_status_for_multi_schedule_account ... ok
test benchmarking::bench_vest_locked ... ok
test tests::extra_balance_should_transfer ... ok
test tests::generates_multiple_schedules_from_genesis_config ... ok
test tests::force_vested_transfer_allows_max_schedules ... ok
test tests::force_vested_transfer_correctly_fails ... ok
test tests::force_vested_transfer_works ... ok
test tests::liquid_funds_should_transfer_with_delayed_vesting ... ok
test tests::merge_finished_and_ongoing_schedules ... ok
test benchmarking::bench_vest_unlocked ... ok
test tests::merge_finished_and_yet_to_be_started_schedules ... ok
test tests::merge_finishing_schedules_does_not_create_a_new_one ... ok
test tests::merge_ongoing_and_yet_to_be_started_schedules ... ok
test benchmarking::bench_vest_other_unlocked ... ok
test tests::merge_ongoing_schedules ... ok
test tests::merge_schedules_that_have_not_started ... ok
test tests::merge_vesting_handles_per_block_0 ... ok
test tests::per_block_works ... ok
test tests::merge_schedules_throws_proper_errors ... ok
test tests::multiple_schedules_from_genesis_config_errors - should panic ... ok
test tests::merging_shifts_other_schedules_index ... ok
test tests::non_vested_cannot_vest_other ... ok
test tests::unvested_balance_should_not_transfer ... ok
test tests::non_vested_cannot_vest ... ok
test tests::vested_balance_should_transfer ... ok
test tests::remove_vesting_schedule ... ok
test tests::vested_transfer_correctly_fails ... ok
test tests::vested_balance_should_transfer_with_multi_sched ... ok
test tests::vested_balance_should_transfer_using_vest_other ... ok
test tests::vested_transfer_less_than_existential_deposit_fails ... ok
test tests::vesting_info_ending_block_as_balance_works ... ok
test tests::vesting_info_validate_works ... ok
test tests::vested_balance_should_transfer_using_vest_other_with_multi_sched ... ok
test tests::vested_transfer_works ... ok
test tests::vested_transfer_allows_max_schedules ... ok
test benchmarking::bench_vested_transfer ... ok

test result: ok. 45 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.10s
```

Polkadot Address: 16htXkeVhfroBhL6nuqiwknfXKcT6WadJPZqEi2jRf9z4XPY
-
Sebastian Kunert authored
Timing issues in container startup have made this test flaky. We now wait for 20 and only then register the parachain. This makes sure that the parachain node is able to see all the relay chain notifications it needs.
-
Alexander Samusev authored
Related to recent discussion. PR makes timeout less strict. cc https://github.com/paritytech/ci_cd/issues/996
-
Sebastian Kunert authored
Part of #3168 On top of #3568

### Changes Overview
- Introduces a new collator variant in `cumulus/client/consensus/aura/src/collators/slot_based/mod.rs`
- Two tasks are part of that module, one for block building and one for collation building and submission (a simplified sketch of this split follows after this entry)
- Introduces a new variant of `cumulus-test-runtime` which has a 2s slot duration, used for zombienet testing
- Zombienet tests for the new collator

**Note:** This collator is considered experimental and should only be used for testing and exploration for now.

### Comparison with `lookahead` collator
- The new variant is slot based, meaning it waits for the next slot of the parachain and then starts authoring
- The search for potential parents remains mostly unchanged from lookahead
- As anchor, we use the current best relay parent
- In general, the new collator tends to be anchored to one relay parent earlier. `lookahead` generally waits for a new relay block to arrive before it attempts to build a block. This means the actual timing of parachain blocks depends on when the relay block has been authored and imported. With the slot-triggered approach we are authoring directly on the slot boundary, where a new relay chain block has probably not yet arrived.

### Limitations
- Overall, the current implementation focuses on the "happy path"
- We assume that we want to collate close to the tip of the relay chain. It would be useful however to have some kind of configurable drift, so that we could lag behind a bit. https://github.com/paritytech/polkadot-sdk/issues/3965
- The collation task is pretty dumb currently. It checks if we have cores scheduled and, if yes, submits all the messages we have received from the block builder until we have something submitted for every core. Ideally we should do some extra checks, i.e. we do not need to submit if the built block is already too old (built on an out-of-range relay parent) or was authored with a relay parent that is not an ancestor of the relay block we are submitting at. https://github.com/paritytech/polkadot-sdk/issues/3966
- There is no throttling; we assume that we can submit _velocity_ blocks every relay chain block. There should be communication between the collator task and the block-builder task.
- The parent search and ConsensusHook are not yet properly adjusted. The parent search makes assumptions about the pending candidate which no longer hold. https://github.com/paritytech/polkadot-sdk/issues/3967
- Custom triggers for block building are not implemented.

--------- Co-authored-by:
Davide Galassi <davxy@datawok.net> Co-authored-by:
Andrei Sandu <54316454+sandreim@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by:
Javier Viola <363911+pepoviola@users.noreply.github.com> Co-authored-by: command-bot <>
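A hedged, heavily simplified sketch of the two-task split described above; it uses plain tokio primitives purely for illustration, and none of the names below come from the actual `slot_based` module:

```rust
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

// Stand-in for a freshly authored parachain block.
struct BuiltBlock {
    slot: u64,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<BuiltBlock>(8);

    // Block-building task: wakes up on every parachain slot boundary and
    // hands whatever it authored over to the collation task.
    let builder = tokio::spawn(async move {
        let mut slots = interval(Duration::from_secs(2)); // 2s test-runtime slots
        for slot in 0..3u64 {
            slots.tick().await;
            // ... author a block for `slot` on top of the best relay parent ...
            if tx.send(BuiltBlock { slot }).await.is_err() {
                break;
            }
        }
    });

    // Collation task: consumes built blocks and submits a collation for each
    // scheduled core (scheduling and validity checks omitted in this sketch).
    while let Some(block) = rx.recv().await {
        println!("submitting collation for parachain slot {}", block.slot);
    }

    let _ = builder.await;
}
```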
-
- Jul 03, 2024
-
-
polka.dom authored
As per #3326, removes pallet::getter macro usage from pallet-insecure-randomness-collective-flip. The syntax `StorageItem::<T, I>::get()` should be used instead. Explicitly implements the getters that were removed as well, following #223. Also makes the storage values public and converts some syntax to the turbofish form (a sketch of a re-implemented getter follows after this entry). cc @muraca --------- Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
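A hedged fragment (pallet-internal code, not runnable on its own) showing what explicitly re-implementing a removed getter typically looks like; the storage item name follows pallet-insecure-randomness-collective-flip, but the value type is abbreviated to a hypothetical `RandomMaterialOf<T>` alias:

```rust
// Inside the pallet module, after removing `#[pallet::getter(fn random_material)]`
// from the (now public) `RandomMaterial` storage value.
impl<T: Config> Pallet<T> {
    /// Hand-written replacement for the generated getter; new code should
    /// simply call `RandomMaterial::<T>::get()` directly.
    pub fn random_material() -> RandomMaterialOf<T> {
        RandomMaterial::<T>::get()
    }
}
```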
-
Axay Sagathiya authored
**Backable Candidate**: If a candidate receives enough supporting Statements from the Parachain Validators currently assigned, that candidate is considered backable. **Backed Candidate**: A Backable Candidate noted in a relay-chain block --- When the candidate backing subsystem receives the `GetBackedCandidates` message, it sends back **backable** candidates, not **backed** candidates. So we should rename this message to `GetBackableCandidates` Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Deepak Chaudhary authored
### ISSUE
Link to the issue: https://github.com/paritytech/polkadot-sdk/issues/3326 cc @muraca

Deliverables
- [Deprecation] remove pallet::getter usage from all pallet-tips

### Test Outcomes
Successful tests by running `cargo test -p pallet-tips --features runtime-benchmarks`:

```
running 26 tests
test tests::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
test benchmarking::bench_retract_tip ... ok
test tests::equal_entries_invariant ... ok
test benchmarking::bench_tip ... ok
test tests::finders_fee_invariant ... ok
test tests::genesis_config_works ... ok
test tests::genesis_funding_works ... ok
test benchmarking::bench_slash_tip ... ok
test tests::reasons_invariant ... ok
test benchmarking::bench_report_awesome ... ok
test tests::close_tip_works ... ok
test tests::report_awesome_from_beneficiary_and_tip_works ... ok
test tests::test_genesis_config_builds ... ok
test tests::test_last_reward_migration ... ok
test benchmarking::bench_tip_new ... ok
test benchmarking::bench_close_tip ... ok
test tests::test_migration_v4 ... ok
test tests::slash_tip_works ... ok
test tests::report_awesome_and_tip_works_second_instance ... ok
test tests::report_awesome_and_tip_works ... ok
test tests::tip_changing_works ... ok
test tests::zero_base_deposit_prohibited - should panic ... ok
test tests::tip_median_calculation_works ... ok
test tests::tip_new_cannot_be_used_twice ... ok
test tests::tip_large_should_fail ... ok
test tests::retract_tip_works ... ok

test result: ok. 26 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s

Doc-tests pallet_tips
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

Polkadot Address: 16htXkeVhfroBhL6nuqiwknfXKcT6WadJPZqEi2jRf9z4XPY --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Ankan authored
Related: https://github.com/paritytech/polkadot-sdk/pull/4804. Fixes the try state error in Westend: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/6564522. Passes here: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/6580393

## Context
Currently in Kusama and Polkadot, an account can both stake directly and join a pool. With the migration of pools to `DelegateStake` (see https://github.com/paritytech/polkadot-sdk/pull/3905), the funds of pool members are locked differently than those of direct stakers.
- Pool member funds use `holds`.
- `pallet-staking` uses deprecated locks (analogous to freezes), which can overlap with holds.

An existing delegator can stake directly, since pallet-staking only looks at free balance. But once an account becomes a staker, we cannot allow it to become a delegator, as this would risk the account using already staked (frozen) funds in pools (see the simplified arithmetic sketch after this entry). When an account ends up participating in both pools and staking, it is no longer able to add any extra bond to the pool, but it can still withdraw funds.

## Changes
- Add a test for the above scenario.
- Remove the assumption that a delegator cannot be a staker.
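A hedged, simplified numeric sketch of why the overlap is risky, assuming the balance model sketched above (freezes overlap free balance, holds are taken out of it); the numbers are arbitrary:

```rust
fn main() {
    let total: u128 = 100;

    // Direct staking freezes (locks) 80: the 80 remains part of free balance,
    // it just cannot be transferred away while the freeze is in place.
    let frozen: u128 = 80;

    // Joining a pool places a hold, and holds are subtracted from free balance.
    // If the same account could also have 80 held for a pool, it would be
    // backing 160 of stake with only 100 of actual funds.
    let held: u128 = 80;

    let double_counted = (frozen + held).saturating_sub(total);
    println!("funds counted twice: {double_counted}"); // prints 60
}
```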
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/4523 Extracting part of https://github.com/paritytech/polkadot-sdk/pull/1903 (credits to @Lederstrumpf for the high-level strategy), but also introducing significant adjustments both to the approach and to the code. The main adjustment is that `ForkVotingProof` accepts only one vote, compared to the original version which accepted a `vec![]` of them (a hedged sketch of this shape follows after this entry). With this approach more calls are needed in order to report multiple equivocated votes on the same commit, but it simplifies the checking logic a lot. We can add support for reporting multiple signatures at once in the future. There are two things missing before this issue can be considered done, but I would propose doing them in a separate PR since this one is already pretty big:
- benchmarks/computing a weight for the new extrinsic (this wasn't present in https://github.com/paritytech/polkadot-sdk/pull/1903 either)
- exposing an API for generating the ancestry proof. I'm not sure if we should do this in the Mmr pallet or in the Beefy pallet

Co-authored-by:
Robert Hambrock <roberthambrock@gmail.com> --------- Co-authored-by:
Adrian Catangiu <adrian@parity.io>
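A hedged sketch of the single-vote shape described above; the field and type names are placeholders and do not reproduce the actual BEEFY primitives:

```rust
// Placeholder types standing in for the real BEEFY proof components.
struct Vote;
struct AncestryProof;
struct KeyOwnerProof;

// Shape of the earlier draft (paritytech/polkadot-sdk#1903): a batch of votes.
struct ForkVotingProofBatch {
    votes: Vec<Vote>,
    ancestry_proof: AncestryProof,
    key_owner_proofs: Vec<KeyOwnerProof>,
}

// Shape adopted here: one vote per proof keeps the checking logic simple;
// several equivocating votes on the same commit mean several submissions.
struct ForkVotingProof {
    vote: Vote,
    ancestry_proof: AncestryProof,
    key_owner_proof: KeyOwnerProof,
}

fn main() {
    // Reporting two equivocating votes just means submitting two proofs.
    let _proofs = vec![
        ForkVotingProof { vote: Vote, ancestry_proof: AncestryProof, key_owner_proof: KeyOwnerProof },
        ForkVotingProof { vote: Vote, ancestry_proof: AncestryProof, key_owner_proof: KeyOwnerProof },
    ];
}
```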
-
gupnik authored
This PR fixes the unused warnings in `frame-support-procedural` crate, raised by the latest stable rust release.
-
Alexandru Gheorghe authored
With random connectivity and latency it is hard to actually figure out a delta in the benchmarking, so disable them in order to get fully deterministic behaviour when measuring performance. At least on my machine, with this configuration the results for approval-throughput are really similar between subsequent runs:

```
CPU usage, seconds        total      per block
approval-distribution     36.9025    3.6902
approval-distribution     36.7579    3.6758
approval-distribution     37.0418    3.7042
approval-distribution     37.0339    3.7034
approval-distribution     36.9342    3.6934
approval-distribution     36.7177    3.6718
approval-voting           52.7756    5.2776
approval-voting           52.5999    5.2600
approval-voting           53.2158    5.3216
approval-voting           53.2493    5.3249
approval-voting           52.8524    5.2852
approval-voting           52.8611    5.2861
approval-voting           52.8210    5.2821
```

--------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Adrian Catangiu authored
- Send bridged WNDs: Penpal Rococo -> AH Rococo -> AH Westend
- Send bridged ROCs: Penpal Westend -> AH Westend -> AH Rococo

The tests send both ROCs and WNDs; for each direction the native asset is only used to pay for the transport fees on the local Asset Hub and is not sent over the bridge. Including the native asset won't be necessary anymore once we get #4375. --------- Signed-off-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by: command-bot <>
-
Alexandru Vasile authored
This PR exposes the `RandomKademliaStarted` event from the litep2p network backend and then increments the appropriate metrics. This is part of https://github.com/paritytech/polkadot-sdk/issues/4681, although it is more of an effort to debug low peer counts.

### Testing Done
- Started a node and fetched queries: `substrate_sub_libp2p_kademlia_random_queries_total` produces results for the litep2p backend

cc @paritytech/networking --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io>
-
Sebastian Kunert authored
Recently thought about the special handling we have for asset-hub chains. They started with relay chain consensus and transitioned to AURA at some point. However, nobody should be authoring with relay chain consensus on these chains anymore; the transition is long done. I propose to remove this special handling, allowing us to unify one more execution path.
-
Alexandru Gheorghe authored
The required CI markdown step seems to have started failing after https://github.com/paritytech/polkadot-sdk/pull/4806. Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Kian Paimani authored
Co-authored-by:
Bastian Köcher <git@kchr.de>
-
- Jul 01, 2024
-
-
Kazunobu Ndong authored
Added instructions for pallet name customisation in the README. --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Yuri Volkov authored
Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
polka.dom authored
As per #3326, removes pallet::getter macro usage from pallet-membership. The syntax `StorageItem::<T, I>::get()` should be used instead. Also converts some syntax to the turbofish form and reimplements the removed getters, following #223. cc @muraca --------- Co-authored-by:
Dónal Murray <donalm@seadanda.dev> Co-authored-by:
Kian Paimani <5588131+kianenigma@users.noreply.github.com>
-
- Jun 28, 2024
-
-
Alexandru Vasile authored
Counterpart of: https://github.com/paritytech/polkadot-sdk/pull/4031 cc @paritytech/networking --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io> Co-authored-by:
Sebastian Kunert <skunert49@gmail.com>
-
Santi Balaguer authored
This adds the new `SignedExtension` to Coretime Rococo and Coretime Westend runtimes. --------- Co-authored-by:
Dónal Murray <donal.murray@parity.io>
-
Adrian Catangiu authored
On Westend Asset Hub, we allow Rococo Asset Hub to act as reserve for any asset native to the Rococo or Ethereum ecosystems (practically providing Westend access to Ethereum assets through double bridging: W<>R<>Eth). On Rococo Asset Hub, we allow Westend Asset Hub to act as reserve for any asset native to the Westend ecosystem. We also allow Ethereum contracts to act as reserves for the foreign assets identified by those same contract locations.
- [x] add emulated tests for various assets (native, trust-based, foreign/bridged) going AHR -> AHW,
- [x] add equivalent tests for the other direction AHW -> AHR.

This PR is a prerequisite to doing the same for the Polkadot<>Kusama bridge.
-
Radha authored
Update the instructions to work with the latest parachain template on Polkadot SDK --------- Co-authored-by:
kianenigma <kian@parity.io> Co-authored-by:
Kian Paimani <5588131+kianenigma@users.noreply.github.com>
-
Serban Iorga authored
Adding `Runtime::OmniNode` variant + small changes --------- Co-authored-by:
kianenigma <kian@parity.io>
-
- Jun 27, 2024
-
-
Niklas Adolfsson authored
Partly fixes https://github.com/paritytech/polkadot-sdk/pull/4890#discussion_r1655548633. The offchain API still needs to be updated to hyper v1.0; I opened an issue for it, since it uses low-level HTTP body features that have been removed.
-
Branislav Kontur authored
Co-authored-by: command-bot <>
-
Lulu authored
Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
Jun Jiang authored
-
Serban Iorga authored
Ensure that the key ownership proof doesn't contain duplicate or unneeded nodes. We already have these checks for the bridge messages proof. Just making them more generic and performing them also for the key ownership proof. --------- Co-authored-by:
Adrian Catangiu <adrian@parity.io>
-
Bastian Köcher authored
Co-authored-by:
gupnik <mail.guptanikhil@gmail.com>
-
- Jun 26, 2024
-
-
Muharem Ismailov authored
Introduce an optional auto-increment setup for the IDs of new assets. --------- Co-authored-by:
joe petrowski <25483142+joepetrowski@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
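A hedged, self-contained sketch of the auto-increment idea; the `next_asset_id` helper and the optional-counter flow are illustrative assumptions, not the pallet-assets implementation:

```rust
// If a next-ID counter is configured (Some), new assets take its current value
// and the counter is bumped; if it is not configured (None), callers keep
// choosing IDs themselves, preserving the old behaviour.
fn next_asset_id(counter: &mut Option<u32>) -> Option<u32> {
    let id = (*counter)?;
    // Stop issuing IDs on overflow rather than wrapping around.
    *counter = id.checked_add(1);
    Some(id)
}

fn main() {
    let mut counter = Some(7u32);
    assert_eq!(next_asset_id(&mut counter), Some(7));
    assert_eq!(next_asset_id(&mut counter), Some(8));

    let mut disabled: Option<u32> = None;
    assert_eq!(next_asset_id(&mut disabled), None);
}
```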
-
Anton Vilhelm Ásgeirsson authored
Enables the `request_revenue` and `notify_revenue` parts of [RFC 5 - Coretime Interface](https://polkadot-fellows.github.io/RFCs/approved/0005-coretime-interface.html)

TODO:
- [x] Finish first pass at implementation
- [x] ~~Need to explicitly burn uncollected and dropped revenue~~ Accumulate it instead
- [x] Confirm working on zombienet
- [x] Tests
- [ ] Enable XCM `request_revenue` sending on Coretime chain on Kusama and Polkadot

Fixes: #2209 --------- Co-authored-by:
Dmitry Sinyavin <dmitry.sinyavin@parity.io> Co-authored-by: command-bot <> Co-authored-by:
s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com> Co-authored-by:
Dónal Murray <donal.murray@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Dmitry Markin authored
This PR upgrades `litep2p` to the latest version and includes two fixes:
1. Enables incoming DHT record validation with the `litep2p` network backend.
2. Sets the `TCP_NODELAY` flag on TCP & WS sockets in the `litep2p` backend, as is currently done in the `libp2p` backend.

--------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-