chore(rln): move from epoch based gap to timestamp based. #2972
Makes sense to me. Thanks for creating this. Commented elsewhere, but I think we may have to validate that the […]
Some things to take into account for determining the timestamp diff we allow: […]
Is the gossipsub window you mention the same as […]?

> The RLN semantics require RLN Relay nodes to keep track of all messages at least within the current epoch to detect spam.

If the gossipsub cache is too short for this purpose, isn't the simplest solution just to make it longer, for example equal to the epoch length (10 minutes)? Can we do it? What are the downsides?
Main downside is that your cache (in memory) will be bigger, leading to a higher memory footprint. And well, if someone deploys a network with an epoch of 1 day, then we need a seen_ttl of 1 day? Said footprint of course depends on the amount of messages being sent over that period. The advantage of that solution is that we don't need to modify the code, just tune a config parameter. One thing that is indeed interesting with this approach is that timestamps don't need to be enforced, which adds some privacy (e.g. mitigating timing attacks?).

But imho it's not about modifying the seen_ttl (which can be done) but about setting a reasonable window size (e.g. 2 min) and then enforcing it. Does it make sense to relay a message that was generated at t0 at t0+9 minutes? Not sure that offers something valuable, but it opens a bunch of attack vectors.
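As a rough illustration of the memory-footprint tradeoff discussed above (the message rate and TTL values below are made up for illustration), the dedup cache grows linearly with seen_ttl:

```python
def cache_entries(msg_rate_per_sec: float, seen_ttl_sec: float) -> int:
    # Approximate number of message IDs the dedup cache must hold:
    # every message seen within the last seen_ttl seconds stays cached.
    return int(msg_rate_per_sec * seen_ttl_sec)

# Illustrative numbers: 10 msg/s network-wide.
default_ttl = cache_entries(10, 120)    # ~2-minute gossipsub window
epoch_10min = cache_entries(10, 600)    # seen_ttl equal to a 10-minute epoch
epoch_1day = cache_entries(10, 86400)   # epoch of 1 day
```

So matching seen_ttl to a 1-day epoch multiplies the cache size by several hundred relative to the 2-minute default, which is the footprint concern raised here.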
It depends on the use case. I don't think it's up to the protocol level to decide. I can imagine, for example, some commit-reveal scheme (a closed-bid auction or something similar) that time-stamps bids but reveals them 9 minutes later.
Can we really enforce timestamps though? And what do you mean by "enforce"? Generally speaking, I think we should define our security assumptions around timestamps. If we assume that most (maybe "most" requires a more precise definition) Relay nodes have their local clocks within, say, 1 second of the real physical time, then we can define the cache size in terms of physical seconds. And even if the attacker tries to manipulate timestamps, its message will be dropped by most Relay nodes.

An analogy: in Bitcoin, timestamps partially determine block validity: a block timestamp must be higher than the median value of the past 11 blocks. Timestamps thus don't have to follow block progression (it might be that […]).

We'd have to think carefully then about edge cases. Say, a message is generated at time […]
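The Bitcoin analogy above can be sketched as a small validity check (simplified for illustration; the actual consensus code also enforces an upper bound against network-adjusted time):

```python
from statistics import median

def timestamp_valid(new_ts: int, past_ts: list[int]) -> bool:
    # Bitcoin's lower-bound rule: a block's timestamp must exceed
    # the median of the previous 11 block timestamps ("median time past").
    return new_ts > median(past_ts[-11:])

# Timestamps need not be monotonic: only the median matters,
# so an individual block can carry an earlier timestamp than
# its parent and still be valid.
```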
A Relay node's actions can be as follows: […]
All in all, I agree that if we trust honest nodes to be in sync with "real" time anyway, then we can also define timestamp-based validity conditions on messages.
I'm not sure that the complexity here makes sense. Since epoch sizes can vary wildly (think an epoch of a day or longer), it doesn't make sense to use epoch comparison to determine whether a message has been sent roughly within the approximately real-time environment of Relay. For example, if we cache messages with the same epoch but an early timestamp, it will be easy to attack this cache by simply generating millions of messages with an early timestamp but a valid epoch. In general, the epoch is useful only as a rate-limiting window, not as a real-time check. The […]
Yep. No timestamp = reject message + lower score of the sender. timestamp_diff > e.g. 20 seconds = reject message and lower score of the sender. Since the idea is to have the timestamp included in the signal (part of the zk proof input), that should cover it.
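A minimal sketch of the two reject rules just described (the 20-second bound is the example value from this comment, and the function name and return shape are hypothetical, not the actual nwaku API):

```python
import time

MAX_TIMESTAMP_DIFF = 20  # seconds; example value from the discussion

def validate_timestamp(msg_timestamp, now=None):
    """Return (accept, penalize_sender) for an incoming RLN message."""
    now = time.time() if now is None else now
    if msg_timestamp is None:
        # No timestamp: reject and lower the sender's score.
        return False, True
    if abs(now - msg_timestamp) > MAX_TIMESTAMP_DIFF:
        # Too far from the validator's local clock: reject and lower score.
        return False, True
    return True, False
```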
But it is still required to compare the message epoch with the Relay node's current epoch to detect rate limit violations, right?
I'm not sure I understand this. Sorry if I get this discussion side-tracked, but I really want to get to the bottom of this :) From what I understand from the spec: […]
A hidden assumption in the RLN spec as it stands now is that it uses "current epoch" as a single term, whereas in reality there is a difference between the "current" epoch as defined by: […]
In summary (please let me know if any of the following is based on a misunderstanding): […]
I don't think so. Afaiu rate limit violations are checked by caching the nullifiers generated within an epoch and ensuring that no double signalling occurs. Of course, we should ensure that the caching occurs for all epochs where valid messages (with valid timestamps) could occur, but I think this is done in the implementation already. The point is that the epoch in a message does not have to be the current epoch for the message to be valid.
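The nullifier-based check described here can be sketched roughly as below. This is a deliberate simplification: real RLN compares the Shamir shares attached to two signals with the same nullifier and, if the messages differ, recovers the spammer's secret for slashing; this sketch only flags the collision.

```python
from collections import defaultdict

class NullifierCache:
    """Hypothetical sketch: per-epoch nullifier sets for double-signal detection."""

    def __init__(self):
        self.seen = defaultdict(set)  # epoch -> set of nullifiers

    def register(self, epoch: int, nullifier: bytes) -> bool:
        """Record a signal; return True if this is a rate-limit violation
        (the same nullifier was already seen in this epoch)."""
        if nullifier in self.seen[epoch]:
            return True
        self.seen[epoch].add(nullifier)
        return False
```

Note that nothing here compares the message epoch against the validator's current epoch: the cache just has to retain every epoch that can still contain valid messages.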
I don't understand it this way. The message is generated with the "current" epoch as seen by the publisher. Currently, this does not have to match the current epoch of the validator, as long as there's no more than […]
IIUC a cache of nullifiers within all epochs that could contain messages with valid timestamps
Only the rate-limiting is defined per epoch.
It must be aware of all valid messages from whichever epochs can contain valid messages. A valid message is one with a timestamp within an acceptable window from the current time as measured by the validator. You're absolutely right that the nullifiers we cache might still cover several epochs to account for boundary conditions, but this is an implementation detail and not related to message validity rules.
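The boundary condition mentioned above is easy to make concrete: given an acceptance window around the validator's clock, the set of epochs that can still contain valid messages follows mechanically (a sketch; parameter names are illustrative):

```python
def epochs_to_cache(now: int, window: int, epoch_size: int) -> list[int]:
    # All epoch indices that could contain a message whose timestamp
    # lies within [now - window, now + window].
    first = (now - window) // epoch_size
    last = (now + window) // epoch_size
    return list(range(first, last + 1))

# Near an epoch boundary the window straddles two epochs, so the
# nullifier cache must keep both even though validity itself is
# purely timestamp-based.
```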
Background
We currently validate that an RLN message epoch is within a given gap (with respect to the current one) measured in epochs. This is good since it leaves some margin for clock jitter among nodes, propagation/processing times, etc.
However, with long epoch sizes (e.g. several minutes or hours), an old message could be replayed into the future to spam the network. And since the gossipsub window only covers 2 minutes, the network will see this message as new.
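The gap described above can be sketched numerically (epoch size, TTL, and gap values below are illustrative, not the deployed configuration): with a 1-hour epoch and a roughly 2-minute gossipsub dedup window, a message replayed 10 minutes after it was first sent passes the epoch check while gossipsub has already forgotten it.

```python
EPOCH_SIZE = 3600    # 1-hour epochs (illustrative)
SEEN_TTL = 120       # gossipsub dedup window, ~2 minutes
MAX_EPOCH_GAP = 1    # allowed +- gap, in epochs

def epoch_of(ts: int) -> int:
    return ts // EPOCH_SIZE

def passes_epoch_check(msg_ts: int, now: int) -> bool:
    return abs(epoch_of(msg_ts) - epoch_of(now)) <= MAX_EPOCH_GAP

def remembered_by_gossipsub(first_seen: int, now: int) -> bool:
    return now - first_seen < SEEN_TTL

# Message sent at t=0 and replayed at t=600 (10 minutes later):
# the epoch-gap check still passes, but the dedup cache no longer
# catches the replay, so the network treats it as a new message.
```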
To solve this issue, it's proposed to: […]