diff --git a/docs/base-chain/node-operators/troubleshooting.mdx b/docs/base-chain/node-operators/troubleshooting.mdx
index 9ff8125c8..ce4d60d9b 100644
--- a/docs/base-chain/node-operators/troubleshooting.mdx
+++ b/docs/base-chain/node-operators/troubleshooting.mdx
@@ -120,6 +120,23 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor
 - **Action**: Stop the conflicting service or change the ports used by the Base node containers by modifying the `ports` section in `docker-compose.yml` and updating the relevant environment variables (`$RPC_PORT`, `$WS_PORT`, etc.) in the `.env` file if necessary.
 
 ---
 
+### [​](#flashblocks-issues) Flashblocks Issues
+
+* **Issue**: `ERROR could not process Flashblock error=missing canonical header for block`
+
+  **Cause**: A race condition between the DB committing the previous canonical block and the arrival of new flashblocks. This is most common right after a node restart or snapshot restoration.
+
+  **Action**: This error can be safely ignored if it appears briefly after a restart; it should resolve within a few minutes once the node is fully synced. If it persists, consider adding a small delay in your flashblock processor or upgrading to the latest base-reth version, which includes fixes for this race condition.
+
+* **Issue**: `WARN No pong response from upstream, reconnecting backoff=Xs timeout_ms=500`
+
+  **Cause**: The upstream flashblocks WebSocket (`wss://mainnet.flashblocks.base.org/ws`) is not responding to pings within the configured timeout. The default `timeout_ms=500` acts as both the ping interval and the pong timeout, which can be too aggressive.
+
+  **Action**: Increase the ping timeout value in your configuration. A value of `1000-1500ms` has been reported to resolve this issue by giving the upstream more time to respond.
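The pong-timeout warning above stems from one 500 ms value doing double duty as ping interval and pong timeout. A minimal sketch of the recommended separation, plus the exponential `backoff=Xs` reconnect schedule the warning mentions (hypothetical constants and helper names, not base-reth's actual configuration keys):

```python
import itertools

def backoff_schedule(base=1.0, cap=30.0):
    """Exponential reconnect backoff: base, 2*base, 4*base, ... capped at `cap`."""
    for attempt in itertools.count():
        yield min(base * (2 ** attempt), cap)

# Separate the two keepalive knobs instead of reusing timeout_ms=500 for both:
# send a ping every PING_INTERVAL seconds, and only reconnect (walking the
# backoff schedule) if no pong arrives within PONG_TIMEOUT seconds.
PING_INTERVAL = 1.0   # seconds between pings (hypothetical setting)
PONG_TIMEOUT = 1.5    # within the 1000-1500ms range reported to resolve the warning

delays = list(itertools.islice(backoff_schedule(), 6))
# First six reconnect delays: 1.0, 2.0, 4.0, 8.0, 16.0, 30.0 (capped)
```

The key design point is that the pong timeout can be raised independently of how often pings are sent, so a slow upstream is tolerated without reducing keepalive frequency.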
+
+* **Issue**: `ERROR Received non-sequential Flashblock for current block` or `ERROR Received non-zero index Flashblock for new block`
+
+  **Cause**: Flashblock messages are being dropped or arriving out of order from the upstream WebSocket, causing the pending state to be zeroed out.
+
+  **Action**: This is typically caused by upstream WebSocket instability, and the node will recover automatically. If it happens frequently, check your network connection to the upstream WebSocket endpoint.
+
+* **Issue**: A pruned snapshot causes a `failed to find the L2 Heads` error after download
+
+  **Cause**: The pruned snapshot may be missing transaction data for certain blocks, causing the node to fail to find the L2 heads on startup.
+
+  **Action**: Try using the full archive snapshot instead. If disk space is a concern, re-downloading the pruned snapshot may help if the previous one was corrupted; if the issue persists, the snapshot itself may need to be regenerated by the Base team.
 
 ## Getting Further Help
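The two sequencing errors in the Flashblocks section correspond to a simple invariant a downstream consumer can check for itself: a new block must start at flashblock index 0, and indices within a block must be consecutive. A minimal sketch of that check (hypothetical helper, not base-reth's actual code):

```python
def accept_flashblock(state, block_number, index):
    """Apply the flashblock sequencing invariant.

    `state` is (current_block, last_index) for the block being assembled,
    or None if there is no pending state. Returns (new_state, accepted);
    on a violation the pending state is reset, mirroring the "zeroed out"
    behaviour described in the troubleshooting entry.
    """
    if state is None or block_number != state[0]:
        # A new block must start at flashblock index 0.
        if index == 0:
            return (block_number, 0), True
        return None, False  # "non-zero index Flashblock for new block"
    if index == state[1] + 1:
        # In-order flashblock for the block currently being assembled.
        return (block_number, index), True
    return None, False  # "non-sequential Flashblock for current block"
```

For example, `accept_flashblock((100, 1), 100, 3)` rejects the message and resets the state, because index 3 arrived when index 2 was expected; the consumer then waits for the next block to start cleanly at index 0.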