
p2p: fix data corruption on longer packets #1393

Merged

merged 4 commits into main from mikhail/p2p-data-corruption on Feb 16, 2024

Conversation

@mzabaluev (Contributor) commented Feb 16, 2024

The code handling chunking of data frames longer than the configured
maximum was faulty.

May fix #1392 and possibly other occurrences of data corruption.

  • Referenced an issue explaining the need for the change
  • Updated all relevant documentation in docs
  • Updated all code comments where relevant
  • Wrote tests
  • Added entry in .changelog/
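For context on the kind of logic the fix touches, here is a minimal sketch of splitting an outgoing message into chunks no larger than a configured maximum. The constant name DATA_MAX_SIZE mirrors the one discussed later in this thread; the send_frame callback and the loop itself are illustrative assumptions, not the crate's actual code.

```rust
// Illustrative sketch only: split a message into chunks of at most
// DATA_MAX_SIZE bytes. `send_frame` is a hypothetical callback standing
// in for whatever seals and writes a single frame.
const DATA_MAX_SIZE: usize = 1024;

fn send_chunked(msg: &[u8], mut send_frame: impl FnMut(&[u8])) {
    let mut offset = 0;
    while offset < msg.len() {
        // The final chunk may be shorter than DATA_MAX_SIZE; a mistake in
        // this offset bookkeeping is the kind of bug that corrupts longer
        // packets.
        let end = usize::min(offset + DATA_MAX_SIZE, msg.len());
        send_frame(&msg[offset..end]);
        offset = end;
    }
}
```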

@codecov-commenter commented Feb 16, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (62ddb98) 58.6% vs. head (4777142) 60.2%.

❗ Current head 4777142 differs from the pull request's most recent head a9486b3. Consider uploading reports for commit a9486b3 to get more accurate results.

Additional details and impacted files
@@           Coverage Diff           @@
##            main   #1393     +/-   ##
=======================================
+ Coverage   58.6%   60.2%   +1.5%     
=======================================
  Files        273     270      -3     
  Lines      27936   26080   -1856     
=======================================
- Hits       16397   15709    -688     
+ Misses     11539   10371   -1168     


@mzabaluev mzabaluev marked this pull request as ready for review February 16, 2024 16:10

@tony-iqlusion (Collaborator) left a comment

Worth a shot at least, although I think ideally the buffer management would be changed so this library allocates the buffer and DATA_MAX_SIZE is just a sanity limit
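One hypothetical shape for that suggestion, sketched here purely for illustration (the trait name and signature are assumptions, not the crate's actual interface): the connection would allocate and return the reassembled message itself, so DATA_MAX_SIZE would only bound individual frames.

```rust
// Hypothetical API sketch, not the crate's actual interface: the library
// allocates the buffer and returns the reassembled message, so
// DATA_MAX_SIZE only caps the size of a single frame on the wire.
pub trait ReadMsg {
    type Error;

    /// Read one complete message, however many frames it spans,
    /// into a buffer owned by the library.
    fn read_msg(&mut self) -> Result<Vec<u8>, Self::Error>;
}
```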

@mzabaluev mzabaluev merged commit 6160d0b into main Feb 16, 2024
23 checks passed
@mzabaluev mzabaluev deleted the mikhail/p2p-data-corruption branch February 16, 2024 20:06
@mzabaluev (Contributor, Author) commented Feb 17, 2024

ideally the buffer management would be changed so this library allocates the buffer and DATA_MAX_SIZE is just a sanity limit

The protocol always sends full AEAD frames, and long messages are split between successive frames, with the last frame padded. So, regardless of what the doc comment says, DATA_MAX_SIZE only defines the size of the encrypted frames. In plaintext, the message is length-prefixed.
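Concretely, the framing described above might look like the sketch below. The constant, the little-endian length prefix, and the seal_frame helper are assumptions made for illustration, not the crate's actual definitions.

```rust
// Illustrative sketch of the framing described above: the plaintext is
// length-prefixed, split into fixed-size chunks, and the final chunk is
// zero-padded so every sealed AEAD frame has the same size on the wire.
const DATA_MAX_SIZE: usize = 1024; // per-frame payload size, as discussed above

fn send_msg(msg: &[u8], mut seal_frame: impl FnMut(&[u8; DATA_MAX_SIZE])) {
    // Length-prefix the plaintext (little-endian chosen purely for illustration).
    let mut plaintext = (msg.len() as u32).to_le_bytes().to_vec();
    plaintext.extend_from_slice(msg);

    // Split into successive full frames; a short final chunk leaves the
    // remainder of the frame as zero padding.
    for chunk in plaintext.chunks(DATA_MAX_SIZE) {
        let mut frame = [0u8; DATA_MAX_SIZE];
        frame[..chunk.len()].copy_from_slice(chunk);
        // `seal_frame` stands in for AEAD encryption plus the actual write.
        seal_frame(&frame);
    }
}
```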

mzabaluev added a commit that referenced this pull request Feb 21, 2024
* p2p: fix data corruption on longer packets

The code handling chunking of data frames longer than the configured
maximum was faulty.

* Regression test for p2p data corruption
Development

Successfully merging this pull request may close these issues.

p2p: failure to read complete message