### Fork-Aware Transaction Pool Implementation

This PR introduces a fork-aware transaction pool (fatxpool) that enhances
transaction management by maintaining a valid txpool state for
different forks.

### High-level overview
The high-level overview was added to the
[`sc_transaction_pool::fork_aware_txpool`](https://github.com/paritytech/polkadot-sdk/blob/3ad0a1b7/substrate/client/transaction-pool/src/fork_aware_txpool/mod.rs#L21)
module. Use:
```
cargo doc --document-private-items -p sc-transaction-pool --open
```
to build the docs. They should give a good overview and a nice entry point
into the new pool's mechanics.

<details>
  <summary>Quick overview (documentation excerpt)</summary>

#### View
For every fork, a view is created. The view is a persisted state of the
transaction pool computed and updated at the tip of the fork. The view
is built around the existing `ValidatedPool` structure.

A view is created on every new best block notification. To create a
view, one of the existing views is chosen and cloned.

When the chain progresses, the view is kept in the cache
(`retracted_views`) to allow building blocks upon intermediary blocks in
the fork.

The views are deleted on finalization: views lower than the finalized
block are removed.

The views are updated with the transactions from the mempool: all
transactions are sent to the newly created views.
A maintain process is also executed for the newly created views, which
basically resubmits and prunes transactions along the appropriate tree
route.

##### View store
View store is the helper structure that acts as a container for all the
views. It provides some convenient methods.
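
As an illustration only (the names below are hypothetical stand-ins, not the actual `fatxpool` types), the view lifecycle described above can be sketched like this:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins: a block number doubles as the block identifier.
type BlockNumber = u64;
type Transaction = String;

/// A simplified per-fork view: the pool state computed at one block.
#[derive(Clone, Default)]
struct View {
    at: BlockNumber,
    ready: Vec<Transaction>,
}

/// Container for all views, with convenience methods for the lifecycle.
#[derive(Default)]
struct ViewStore {
    /// Views at the current fork tips.
    active_views: HashMap<BlockNumber, View>,
    /// Views for intermediary blocks, kept so blocks can still be built on them.
    retracted_views: HashMap<BlockNumber, View>,
}

impl ViewStore {
    /// On a new best block: clone one of the existing views, keep the old
    /// one in the cache, and activate the clone at the new tip.
    fn create_view(&mut self, parent: BlockNumber, new_best: BlockNumber) {
        let mut view = match self.active_views.remove(&parent) {
            Some(parent_view) => {
                // The chain progressed past `parent`; cache its view.
                self.retracted_views.insert(parent, parent_view.clone());
                parent_view
            }
            None => View::default(),
        };
        view.at = new_best;
        self.active_views.insert(new_best, view);
    }

    /// On finalization: drop every view below the finalized block.
    fn finalize(&mut self, finalized: BlockNumber) {
        self.active_views.retain(|at, _| *at >= finalized);
        self.retracted_views.retain(|at, _| *at >= finalized);
    }
}
```

In the real pool the choice of which view to clone and the subsequent maintain along the tree route are considerably more involved; see the module docs linked above.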

##### Submitting transactions
Every transaction is submitted to every view at the tips of the forks.
Retracted views are not updated.
Every transaction also goes into the mempool.
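
A minimal sketch of this fan-out, again with hypothetical types rather than the actual API:

```rust
use std::collections::HashMap;

type BlockNumber = u64;
type Transaction = String;

/// Every transaction goes to every view at a fork tip (retracted views are
/// not updated) and is always recorded in the mempool as well.
fn submit_transaction(
    active_views: &mut HashMap<BlockNumber, Vec<Transaction>>,
    mempool: &mut Vec<Transaction>,
    tx: Transaction,
) {
    for view_txs in active_views.values_mut() {
        view_txs.push(tx.clone());
    }
    mempool.push(tx);
}
```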

##### Internal mempool
In short, the main purpose of the internal mempool is to prevent a
transaction from being lost. That could happen when a transaction is
invalid on one fork but valid on another. It also allows the
txpool to accept transactions when no blocks have been reported yet.

The mempool removes its transactions when they get finalized.
Transactions are also periodically verified on every finalized event and
removed from the mempool if no longer valid.
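
A sketch of that pruning behaviour, assuming a hypothetical `Mempool` type:

```rust
use std::collections::HashMap;

type TxHash = u64;
type Transaction = String;

/// Hypothetical internal mempool: it keeps every submitted transaction so
/// nothing is lost while forks disagree about validity.
#[derive(Default)]
struct Mempool {
    txs: HashMap<TxHash, Transaction>,
}

impl Mempool {
    /// Called on every finalized event.
    fn on_finalized(
        &mut self,
        finalized_txs: &[TxHash],
        is_still_valid: impl Fn(&Transaction) -> bool,
    ) {
        // Transactions included in finalized blocks leave the mempool.
        for hash in finalized_txs {
            self.txs.remove(hash);
        }
        // Periodic revalidation: drop transactions that are no longer valid.
        self.txs.retain(|_, tx| is_still_valid(tx));
    }
}
```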

#### Events
Transaction events from multiple views are merged and filtered to avoid
duplicated events.
`Ready` / `Future` / `InBlock` events originate in the views and are
de-duplicated and forwarded to external listeners.
`Finalized` events originate in the fork-aware-txpool logic.
`Invalid` events require special care and can originate in both the
view and the fork-aware-txpool logic.
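
The merge-and-filter step can be pictured with a small, hypothetical aggregator (the real pool tracks per-transaction listener state, but the dedup idea is the same):

```rust
use std::collections::HashSet;

type TxHash = u64;

/// Simplified subset of the per-view transaction events.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum ViewEvent {
    Ready(TxHash),
    Future(TxHash),
    InBlock(TxHash),
}

/// Merges event streams coming from all views; each distinct event is
/// forwarded to external listeners exactly once.
#[derive(Default)]
struct EventAggregator {
    seen: HashSet<ViewEvent>,
}

impl EventAggregator {
    /// Returns `Some(event)` the first time an event is observed, `None`
    /// for duplicates reported by other views.
    fn on_view_event(&mut self, event: ViewEvent) -> Option<ViewEvent> {
        self.seen.insert(event).then_some(event)
    }
}
```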

#### Light maintain
Sometimes the transaction pool does not have enough time to prepare a
fully maintained view with all retracted transactions revalidated. To
avoid providing an empty ready-transaction set to the block builder
(which would result in an empty block), light maintain was implemented.
It simply removes the already-imported transactions from the ready
iterator.
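
In effect, light maintain is just a filter over the ready set; a toy version (hypothetical names, not the actual implementation):

```rust
/// Light maintain, sketched: instead of a full maintain (revalidating all
/// retracted transactions), drop the transactions already imported in the
/// enacted blocks from the ready set handed to the block builder.
fn light_maintain(ready: Vec<String>, imported: &[String]) -> Vec<String> {
    ready
        .into_iter()
        .filter(|tx| !imported.contains(tx))
        .collect()
}

fn main() {
    let ready = vec!["tx1".into(), "tx2".into(), "tx3".into()];
    let imported = vec!["tx2".to_string()];
    assert_eq!(light_maintain(ready, &imported), ["tx1", "tx3"]);
}
```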

#### Revalidation
Revalidation is performed for every view. The revalidation process is
started by a trigger, and the revalidation work is terminated as soon
as a new best block / finalized event is notified to the transaction
pool.
The revalidation result is applied to the newly created view, which is
built upon the revalidated view.

Additionally, parts of the mempool are also revalidated to make sure
that no transactions are stuck in the mempool.
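
A sketch of the termination behaviour, assuming a hypothetical worker driven by a cancellation flag (the real pool uses its own task plumbing):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

type Transaction = String;

/// Hypothetical revalidation worker: it walks the view's transactions and
/// stops as soon as a new best block / finalized notification sets the
/// flag; the partial result is then applied to the view built on top of
/// the revalidated one.
fn revalidate(
    txs: &[Transaction],
    cancelled: &AtomicBool,
    is_valid: impl Fn(&Transaction) -> bool,
) -> Vec<Transaction> {
    let mut invalid = Vec::new();
    for tx in txs {
        if cancelled.load(Ordering::Relaxed) {
            break;
        }
        if !is_valid(tx) {
            invalid.push(tx.clone());
        }
    }
    invalid
}
```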


#### Logs
The most important log line for understanding the state of the txpool
is:
```
              maintain: txs:(0, 92) views:[2;[(327, 76, 0), (326, 68, 0)]] event:Finalized { hash: 0x8...f, tree_route: [] }  took:3.463522ms
                             ^   ^         ^     ^   ^  ^      ^   ^  ^        ^                                                   ^
unwatched txs in mempool ────┘   │         │     │   │  │      │   │  │        │                                                   │
   watched txs in mempool ───────┘         │     │   │  │      │   │  │        │                                                   │
                     views  ───────────────┘     │   │  │      │   │  │        │                                                   │
                      1st view block # ──────────┘   │  │      │   │  │        │                                                   │
                           number of ready tx ───────┘  │      │   │  │        │                                                   │
                                number of future tx ─────┘      │   │  │        │                                                   │
                                        2nd view block # ──────┘   │  │        │                                                   │
                                      number of ready tx ──────────┘  │        │                                                   │
                                           number of future tx ───────┘        │                                                   │
                                                                 event ────────┘                                                   │
                                                                       duration  ──────────────────────────────────────────────────┘
```
It is logged after the maintenance is done.

The `debug` level enables per-transaction logging, making it possible
to track all transaction-related actions that happen in the txpool.
</details>


### Integration notes

For teams having a custom node, the new txpool needs to be instantiated,
typically in the `service.rs` file. Here is an example:

https://github.com/paritytech/polkadot-sdk/blob/9c547ff3/cumulus/polkadot-omni-node/lib/src/common/spec.rs#L152-L161
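
For orientation, the construction at the linked lines looks roughly like the sketch below. Treat it as a sketch, not a drop-in: names such as `config`, `client`, and `task_manager` come from the surrounding service code, and the linked lines are authoritative for the exact builder API.

```rust
// Sketch of wiring the new pool in a custom `service.rs` (see the linked
// example for the authoritative version). The pool type (single-state or
// fork-aware) is selected via the transaction-pool options from the CLI.
let transaction_pool = sc_transaction_pool::Builder::new(
    task_manager.spawn_essential_handle(),
    client.clone(),
    config.role.is_authority().into(),
)
.with_options(config.transaction_pool.clone())
.with_prometheus(config.prometheus_registry())
.build();
```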

To enable the new transaction pool, the following CLI argument shall be
specified: `--pool-type=fork-aware`. If it works, the following
information will be printed in the log:
```
2024-09-20 21:28:17.528  INFO main txpool: [Parachain]  creating ForkAware txpool.
```

For debugging, the following log targets shall be enabled:
```
      "-lbasic-authorship=debug",
      "-ltxpool=debug",
```
*Note:* `trace` level for the `txpool` target enables per-transaction logging.

### Future work
The current implementation seems to be stable; however, further
improvements are required.
Here is the umbrella issue for future work:
- https://github.com/paritytech/polkadot-sdk/issues/5472


Partially fixes: #1202

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>

# Polkadot

Implementation of a https://polkadot.network node in Rust based on the Substrate framework.

The README provides information about installing the polkadot binary and developing on the codebase. For more specific guides, like how to run a validator node, see the Polkadot Wiki.

## Installation

### Using a pre-compiled binary

If you just wish to run a Polkadot node without compiling it yourself, you may either:

- run the latest released binary (make sure to also download all the worker binaries and put them in the same directory as `polkadot`), or
- install Polkadot from one of our package repositories.

### Debian-based (Debian, Ubuntu)

Currently supports Debian 10 (Buster) and Ubuntu 20.04 (Focal), and derivatives. Run the following commands as the root user.

```
# Import the security@parity.io GPG key
gpg --recv-keys --keyserver hkps://keys.mailvelope.com 9D4B2B6EB8F97156D19669A9FF0812D491B96798
gpg --export 9D4B2B6EB8F97156D19669A9FF0812D491B96798 > /usr/share/keyrings/parity.gpg
# Add the Parity repository and update the package index
echo 'deb [signed-by=/usr/share/keyrings/parity.gpg] https://releases.parity.io/deb release main' > /etc/apt/sources.list.d/parity.list
apt update
# Install the `parity-keyring` package - This will ensure the GPG key
# used by APT remains up-to-date
apt install parity-keyring
# Install polkadot
apt install polkadot
```

Installation from the Debian repository will create a systemd service that can be used to run a Polkadot node. This is disabled by default, and can be started by running `systemctl start polkadot` on demand (use `systemctl enable polkadot` to make it auto-start after reboot). By default, it will run as the `polkadot` user. Command-line flags passed to the binary can be customized by editing `/etc/default/polkadot`. This file will not be overwritten on updating Polkadot. You may also just run the node directly from the command-line.

## Building

Since the Polkadot node is based on Substrate, first set up your build environment according to the Substrate installation instructions.

### Install via Cargo

Make sure you have the support software installed from the Build from Source section below this section.

If you want to install Polkadot in your PATH, you can do so with:

```
cargo install --git https://github.com/paritytech/polkadot-sdk --tag <version> polkadot --locked
```

### Build from Source

Build the client by cloning this repository and running the following commands from the root directory of the repo:

```
git checkout <latest tagged release>
cargo build --release
```

Note: if you want to move the built `polkadot` binary somewhere (e.g. into `$PATH`) you will also need to move `polkadot-execute-worker` and `polkadot-prepare-worker`. You can let cargo do all this for you by running:

```
cargo install --path . --locked
```

### Build from Source with Docker

You can also build from source using the Parity CI docker image:

```
git checkout <latest tagged release>
docker run --rm -it -w /shellhere/polkadot \
                    -v $(pwd):/shellhere/polkadot \
                    paritytech/ci-linux:production cargo build --release
sudo chown -R $(id -u):$(id -g) target/
```

If you want to reproduce other steps of the CI process, you can use the following guide.

## Networks

This repo supports runtimes for Polkadot, Kusama, and Westend.

### Connect to Polkadot Mainnet

Connect to the global Polkadot Mainnet network by running:

```
../target/release/polkadot --chain=polkadot
```

You can see your node on Polkadot telemetry (set a custom name with --name "my custom name").

Connect to the "Kusama" Canary Network

Connect to the global Kusama canary network by running:

```
../target/release/polkadot --chain=kusama
```

You can see your node on Kusama telemetry (set a custom name with --name "my custom name").

### Connect to the Westend Testnet

Connect to the global Westend testnet by running:

```
../target/release/polkadot --chain=westend
```

You can see your node on Westend telemetry (set a custom name with --name "my custom name").

## Obtaining DOTs

If you want to do anything on Polkadot, Kusama, or Westend, then you'll need to get an account and some DOT, KSM, or WND tokens, respectively. Follow the instructions on the Wiki to obtain tokens for your testnet of choice.

## Hacking on Polkadot

If you'd actually like to hack on Polkadot, you can grab the source code and build it. Ensure you have Rust and the support software installed.

Then, grab the Polkadot source code:

```
git clone https://github.com/paritytech/polkadot-sdk.git
cd polkadot-sdk
```

Then build the code. You will need to build in release mode (`--release`) to start a network. Only use debug mode for development, where its faster compile times help with testing.

```
cargo build
```

You can run the tests if you like:

```
cargo test --workspace --profile testnet
# Or run only the tests for a specified crate
cargo test -p <crate-name> --profile testnet
```

You can start a development chain with:

```
cargo run --bin polkadot -- --dev
```

Detailed logs may be shown by running the node with the following environment variables set:

```
RUST_LOG=debug RUST_BACKTRACE=1 cargo run --bin polkadot -- --dev
```

## Development

You can run a simple single-node development "network" on your machine by running:

```
cargo run --bin polkadot --release -- --dev
```

You can muck around by heading to https://polkadot.js.org/apps and choosing "Local Node" from the Settings menu.

### Local Two-node Testnet

If you want to see the multi-node consensus algorithm in action locally, then you can create a local testnet. You'll need two terminals open. In one, run:

```
polkadot --dev --alice -d /tmp/alice
```

And in the other, run:

```
polkadot --dev --bob -d /tmp/bob --bootnodes '/ip4/127.0.0.1/tcp/30333/p2p/ALICE_BOOTNODE_ID_HERE'
```

Ensure you replace ALICE_BOOTNODE_ID_HERE with the node ID from the output of the first terminal.

## Monitoring

Set up Prometheus and Grafana.

Once you set this up you can take a look at the Polkadot Grafana dashboards that we currently maintain.

## Using Docker

## Shell Completion

## Contributing

### Contribution Guidelines

### Contributor Code of Conduct

## License

Polkadot is GPL 3.0 licensed.