Indexer randomly halts #155
Comments
Yes! I've got the same thing happening, and I'm still unsure how it happens. I use `make postgres`, `make run_server` and `make run_indexer` separately, whereas some people use `make compose`. It's also possible the indexer is being hit heavily because it's used on my site, but I don't think that should be the case. I checked my logs, and this is the only entry from before a halt that might give some insight:
[log excerpt not shown] |
Lol, wait a sec. Are people trying to get in? I see usernames in the log that don't match my own, and password authentication failures. Could this cause the postgres <=> indexer connection to get interrupted for long enough to time out? |
Lol yeah, I see admin, root, a bunch of other attempts. Hmm. |
I saw that my server (in the config) was serving at 0.0.0.0:30303 and changed it to 127.0.0.1:30303, though I'm unsure if this solves this particular issue. I'm a bit of a noob when it comes to SQL or networking in general. Perhaps you might know better, @opsecx? For reference, the relevant bit of my config now looks roughly like the sketch below.
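The key names here are illustrative (I'm not sure they match every namadexer version, so treat this as a sketch of the idea rather than the exact file):

```toml
# Settings.toml (sketch) -- bind the server to loopback only,
# so it is reachable from this machine but not from the internet.
[server]
serve_at = "127.0.0.1"   # was "0.0.0.0", which listens on every interface
port = 30303
```
|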
Oh, I also see this being new in the recent version of namadexer:
[screenshot not shown] |
The server part has nothing to do with how the indexer part runs, afaik. |
I've now configured my SQL server to only allow local connections, and I can now see which IP address keeps trying to log in. It's coming from China. Sigh.
Did this in the [file not shown]. Dunno if this is okay, I'm no expert when it comes to this. The sketch below shows roughly the kind of rule I mean.
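In case it helps anyone else: restricting postgres to local connections is typically done in pg_hba.conf (a sketch, assuming password auth; your auth methods may differ):

```
# pg_hba.conf (sketch) -- allow only local connections.
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 scram-sha-256
host    all       all   127.0.0.1/32  scram-sha-256
host    all       all   ::1/128       scram-sha-256
# With no "host all all 0.0.0.0/0 ..." line, remote clients are refused.
```

Setting `listen_addresses = 'localhost'` in postgresql.conf achieves a similar effect at the socket level.
|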
One final thing: @rllola, do you reckon people who implement the indexer without taking any security measures could become victims of cybercrime? I'm not sure what attackers could do if they logged into the postgres docker. Are there things that could be implemented to make the indexer a bit more secure from the get-go, or is this really in the hands of the person integrating it? |
Do you mean 'run' the indexer?
Regarding your database logs: it is because you left your database open to the world. There are people (good and bad) mass-scanning the internet, and they will attempt to get into whatever they find (see this DEF CON talk: https://www.youtube.com/watch?v=nX9JXI4l3-E). If they happen to connect, they can actually read the data or erase it. Blockchain data is public, so not much harm there; if they erase the tables, you will just need to resync. However, they "could" access whatever else is on the server. I believe having it containerized would mitigate that. As a best practice, just don't make your database (or anything else) public, and replace the default password.
Regarding the halt: I don't think it is related to what you saw in the postgres logs. My guess is that the JSON-RPC request could have been hanging and never finalized, leaving the indexer waiting for an answer. An easy way to find out would be to change the timeout value to a small one and see if it works, roughly as sketched below.
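Something like this in Settings.toml (the section and key name are illustrative; check the example config shipped with your namadexer version for the real one):

```toml
# Settings.toml (sketch) -- a deliberately small timeout, so a hung
# JSON-RPC request fails quickly instead of blocking the indexer forever.
[indexer]
timeout_secs = 5
```
|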
That's the request between the indexer component and the RPC? Could setting a bigger timeout value make it not "halt"? |
Right! Thank you for the thorough explanation and reference! What I mean are the people who simply follow the steps to set up a namadexer but don't think about security measures at all (which is pretty silly). I'm now figuring it out by experiencing it, though others could perhaps be saved from this by being made aware of it, or by being linked to a page explaining how to secure their postgres DB further. Though yes, it's really through a lack of experience with postgres that I stumbled into this. I usually like to learn by hands-on experience, so I'm "glad" this happened :)!
Ah so the new timeout in the Settings.toml file? |
Yes, it is the request between the indexer and the RPC. We actually want it to get unstuck, and to do so it needs to fail fast so we can retry the query. There could also be an issue on the node side that makes it hang, or it could be something else entirely. We have been able to reproduce it, so I am confident we can find out what is happening. The sketch below shows the fail-fast-and-retry idea.
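Not the actual namadexer code, just a sketch of the pattern in Rust (tokio + reqwest with the `json` feature; `fetch_block_with_retry` and the 10s/1s values are made up for illustration):

```rust
use std::time::Duration;

/// Sketch: wrap a JSON-RPC `block` request in an explicit timeout plus a
/// retry loop, so a hung HTTP request fails fast instead of leaving the
/// indexer waiting forever for an answer.
async fn fetch_block_with_retry(
    client: &reqwest::Client,
    url: &str,
    height: u64,
) -> anyhow::Result<serde_json::Value> {
    let body = serde_json::json!({
        "jsonrpc": "2.0", "id": 1, "method": "block",
        "params": { "height": height.to_string() },
    });
    loop {
        // Fail fast: give each attempt at most 10 seconds.
        match tokio::time::timeout(
            Duration::from_secs(10),
            client.post(url).json(&body).send(),
        )
        .await
        {
            Ok(Ok(resp)) => return Ok(resp.json().await?),
            Ok(Err(e)) => eprintln!("request error, retrying: {e}"),
            Err(_) => eprintln!("request timed out, retrying"),
        }
        // Brief pause before retrying the query.
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```
|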
A good first step would indeed be to improve the documentation and highlight what needs to be changed for good practice. Even people with experience will disregard cyber security. We could also remove the default values, but that would force people to fill them in, and they might be overwhelmed and discouraged from actually running it. In my experience, tutorials and guides (videos or blog posts) are the best way to help people get started. So if anyone from the community wants to create one from their perspective, we would greatly appreciate it. |
An update on this issue: unfortunately, the fix that I thought would take 2 seconds to implement is not that trivial. Fortunately, there is an open PR to fix it, but it is not yet merged. Let's see if we can get it merged in the next weeks; if not, we can just get rid of this lib and write our own request functions. |
Sounds great! Unfortunately for me, I'm working with the tables that were deleted, which leaves me in a bit of an in-between situation. Any chance of the same data being provided as views or similar? |
This has now been merged: informalsystems/tendermint-rs#1379. We are unblocked. |
This PR should fix the issue definitively: #168 |
Great work! |
The indexer occasionally halts without much warning; it just gets stuck and requires a manual restart, after which it processes the remaining blocks fine. Zenode has this issue too.