Stage 4 block verification failed on startup #10665
After a docker restart, parity fails to start syncing.

Is there any option to fix it without a full resync?
Comments
Afaict that shouldn't have been an actual crash; you're just getting bad blocks on the network, which your node should be ignoring. Kovan has just gone through a hardfork and you are on the right version of parity to continue syncing with the correct side of the chain. How exactly are you running your node? Are you using …?
Here are the details of the fork: #10628 (comment). If you are still having trouble, you can always manually use the chainspec you can find in this repo at …. It appears you are somehow still using the old version of the chain specification (your node is expecting an old validator).
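For reference, a minimal sketch of pointing a node at a locally saved spec file (the file name `kovan.json` and its location are assumptions; use the spec file from this repo):

```bash
# Sketch: run parity against a local chain specification instead of the
# built-in preset. The path ./kovan.json is hypothetical; substitute the
# spec file copied from this repository.
parity --chain ./kovan.json
```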
It seems the reason is that the new bootnode is not fully synchronized yet: https://kovan-netstat.poa.network/. The old bootnodes don't have the new chain spec, so they are not working at the moment: https://kovan-stats.parity.io/
@joshua-mir
I have now tried deleting nodes.json; the error message is gone, but syncing has not continued.
@APshenkin it looks like you'll have to wait for more peers or bootnodes running up-to-date versions to come online. @varasev I've passed on that our bootnodes appear to still be running the old chainspec. That's probably a mistake. (cc @gabreal?)
Yes, they should have been updated as well, but it seems they still have the old spec. Also, we had to restart our bootnode right after the fork; for that reason it's still syncing.
@APshenkin could you try once more? Our node is now fully synchronized.
@varasev currently no luck :(
@APshenkin a resync might be unavoidable, unfortunately; I understand that can be frustrating with a fat-db archive node. You might be able to rewind with … *edit: you don't need to be on beta to try a reset; it was added in 2.3.
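For context, a sketch of the reset subcommand referred to above (the block count is illustrative; judging by the report further down, a reset rewound the head by 10 blocks):

```bash
# Sketch: rewind the local database by N blocks (N here is illustrative).
# Run with the same --chain and --base-path the node normally uses.
parity db reset 10 --chain kovan
```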
@APshenkin also try with the following CLI flags: … where the …
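The flags themselves were lost from this comment, but the follow-up below mentions "adding an up-to-date node to your reserved peers", so presumably something along these lines (the enode URL is a placeholder, not a real node):

```bash
# reserved.txt holds one enode URL per line; the URL below is a placeholder.
echo 'enode://<node-id>@<ip>:<port>' > reserved.txt
parity --chain kovan --reserved-peers ./reserved.txt
```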
@joshua-mir Now I receive another error.
Generally speaking, you should be able to ignore bad block detections. I would try adding an up-to-date node to your reserved peers, as recommended above.
It looks like after the db reset there is a different issue (not a problem with bad blocks). Before the reset the block number was 10960440; the new post-hardfork block, 10960441, is the one that alerts on startup. I ran this command: … and it successfully reset to block 10960430. Now when launching (with reserved peers as well), the parity node takes block 10960441 on startup (I don't know from where, maybe some cache) and tries to insert it, but block 10960440 is no longer there. Also, when running with reserved peers, after some time parity starts printing logs about syncing, but it's stuck on block 10960430.
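One way to confirm which block the node is actually on is a standard JSON-RPC query (a sketch, assuming the HTTP interface is enabled on the default port 8545):

```bash
# Ask the node for its current head block; the result is hex-encoded.
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://127.0.0.1:8545
```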
@seunlanlege any ideas?
@APshenkin it was either a reset or a resync, unfortunately.
So, after several hours, syncing still hadn't started after the block reset. It looks like the block-reset feature is not working as expected, or I don't understand why it can't start syncing. Currently I have removed …
Maybe I can provide detailed logs to help you fix it? It's not a problem to resync an archive node for kovan (it's not that big), but if somebody faces a similar issue on the live network, a resync will be painful for them (1-2 months to sync again).
Pinged Seun above, who implemented the feature and might have a better idea about what's going on.
hey @APshenkin, the error you're facing is a known issue, #9910, that plagues the way blocks are imported. For now, I would advise a resync.
We saw the same: 2.5.0 instances on kovan panicked in that way, and after upgrading to 2.5.1 while retaining data they still wouldn't proceed past 10960440. After deleting the chain data and restarting, our instances all successfully resynced. This was annoying for us, as it appeared to side-step our check for their up-to-dateness.
Closing the issue due to its stale state.