
Introduce double buffered HULA reader #189

Merged: 7 commits from double-buffered-hula-reader into HULKs:main on Apr 20, 2023

Conversation

@h3ndrk (Member) commented on Apr 15, 2023

Introduced Changes

This PR introduces Unix socket draining when reading from the HULA socket. This is needed because our HULK process connects very early to the Unix socket, and HULA/LoLA immediately starts sending state messages to HULK, which fills our read buffer. After the connection, we further initialize the HardwareInterface, e.g. camera initialization/resetting. This may take a long time, during which around 1 second of historic state messages accumulates in HULK's read buffer. Once the HardwareInterface has been initialized, the cyclers begin to run, and the control cycler rushes to drain the accumulated messages because the sensor data receiver has a lot of data available to feed the cycle. This results in a burst of control messages based on historic (1 second old) state being sent to HULA/LoLA. But since HULA/LoLA only updates its shared memory and does not communicate directly with the hardware, this usually goes unnoticed, and the system quickly converges to drained buffers with nearly no latency. The bug was discovered because the new HULA communicates synchronously with the hardware and therefore needs this fix.

Changes in HULA are not needed. No worries about the diff size: most of the added lines are test code. 😉
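For readers unfamiliar with the approach, here is a minimal, hypothetical sketch of the draining idea. It is not the actual code in crates/hulk/src/nao/double_buffered_reader.rs; names such as `DoubleBufferedReader` and `drain` are illustrative, and it assumes a non-blocking socket and a fixed 652-byte state message (the size mentioned in the discussion below).

```rust
// Hypothetical sketch of the draining idea, not the code in this PR.
use std::io::{ErrorKind, Read};
use std::os::unix::net::UnixStream;

/// Size of one HULA state message in bytes (taken from the discussion below).
const STATE_MESSAGE_SIZE: usize = 652;

struct DoubleBufferedReader {
    /// Newest complete state message seen so far.
    current: [u8; STATE_MESSAGE_SIZE],
    /// Bytes of a partially received message, stitched on the next drain.
    partial: Vec<u8>,
}

impl DoubleBufferedReader {
    fn new() -> Self {
        Self {
            current: [0; STATE_MESSAGE_SIZE],
            partial: Vec::with_capacity(STATE_MESSAGE_SIZE),
        }
    }

    /// Reads from a non-blocking socket until it would block, keeping only
    /// the newest complete message and carrying over any partial tail.
    fn drain(
        &mut self,
        socket: &mut UnixStream,
    ) -> std::io::Result<&[u8; STATE_MESSAGE_SIZE]> {
        let mut chunk = [0u8; 4096];
        loop {
            match socket.read(&mut chunk) {
                Ok(0) => break, // peer closed the connection
                Ok(read) => {
                    self.partial.extend_from_slice(&chunk[..read]);
                    // Drop all but the newest complete message in the backlog.
                    while self.partial.len() >= STATE_MESSAGE_SIZE {
                        self.current
                            .copy_from_slice(&self.partial[..STATE_MESSAGE_SIZE]);
                        self.partial.drain(..STATE_MESSAGE_SIZE);
                    }
                }
                Err(error) if error.kind() == ErrorKind::WouldBlock => break,
                Err(error) => return Err(error),
            }
        }
        Ok(&self.current)
    }
}
```

In the real setup, the first read presumably blocks (or is preceded by a poll) until at least one complete message is available; the sketch only illustrates why a partial tail has to be kept between drains.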

ToDo / Known Issues

None

Ideas for Next Iterations (Not This PR)

None

How to Test

The double buffering should have nearly 100% test coverage, but please verify on a real NAO that everything works as intended. The chest button LED should blink with a period of exactly one second immediately from startup, which is a good indicator that the fix works.

@h3ndrk force-pushed the double-buffered-hula-reader branch 3 times, most recently from 2c4fa47 to 6aab8e4 on April 15, 2023 19:19
@h3ndrk enabled auto-merge on April 15, 2023 19:26
@h3ndrk force-pushed the double-buffered-hula-reader branch from 6aab8e4 to 1627f50 on April 16, 2023 05:27
@h3ndrk force-pushed the double-buffered-hula-reader branch from 1627f50 to 043fa5d on April 16, 2023 05:32
@h3ndrk disabled auto-merge on April 16, 2023 07:47
@h3ndrk enabled auto-merge on April 16, 2023 07:59
@schmidma self-assigned this on Apr 16, 2023
@schmidma (Member) left a review comment

Some general remarks and mostly nit-picking on naming.

One general question:
You are very concerned about partial messages and stitching them back together afterwards. Is this actually happening? I assumed the Unix socket to have extremely high memory bounds on its internal buffers. Why are you concerned with partial messages? Please elaborate.

Inline review comments on crates/hulk/src/nao/double_buffered_reader.rs and crates/hulk/src/nao/hula.rs (all resolved).
@h3ndrk (Member, Author) commented on Apr 16, 2023

I assumed the Unix socket to have extremely high memory bounds on its internal buffers.

I read that Unix sockets have around 100-200 KB of buffer space. HULA sends messages of 652 bytes, which means the buffer is full after around 300 messages, i.e. after approximately 3-4 seconds at a rate of 80 Hertz. When the buffer gets full, it is very likely that only a partial message still fits. In my experiments, around 80 KB of buffered messages were typical during initialization. Hopefully this explains the message-stitching efforts.
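As a quick sanity check of these numbers, here is a small self-contained calculation; the buffer size and message rate are the assumed values from this comment, not constants from the codebase.

```rust
// Back-of-the-envelope check of the backlog estimate above.
const SOCKET_BUFFER_SIZE: usize = 200 * 1024; // assumed ~200 KB kernel buffer
const STATE_MESSAGE_SIZE: usize = 652; // bytes per HULA state message
const MESSAGE_RATE: f64 = 80.0; // state messages per second

fn main() {
    let messages_until_full = SOCKET_BUFFER_SIZE / STATE_MESSAGE_SIZE; // 314
    let seconds_until_full = messages_until_full as f64 / MESSAGE_RATE; // ~3.9 s
    println!("{messages_until_full} messages ≈ {seconds_until_full:.1} s of backlog");
    // 652 does not divide the buffer size evenly, so a full buffer almost
    // certainly ends in a partial message that has to be stitched later.
    assert_ne!(SOCKET_BUFFER_SIZE % STATE_MESSAGE_SIZE, 0);
}
```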

@h3ndrk force-pushed the double-buffered-hula-reader branch from a2937b2 to 70d9bf3 on April 16, 2023 15:59
@schmidma (Member) commented:

But did you ever observe partial messages in the socket buffer? And what is HULA doing then? Is it blocking? Crashing?!

@h3ndrk (Member, Author) commented on Apr 17, 2023

But did you ever observe partial messages in the socket buffer?

No, the scenario would only happen if HULK paused reading for 3-4 seconds and then read faster than HULA is able to write the remainder of a message. This could happen if the scheduler decides not to schedule HULA for a short moment.

And what is HULA doing then? Is it blocking?

Yes

Crashing?!

No, but it will wait until everything has been written before reading more messages from LoLA.

@h3ndrk force-pushed the double-buffered-hula-reader branch from 5da6225 to df9e3b9 on April 17, 2023 17:29
@h3ndrk force-pushed the double-buffered-hula-reader branch from 2dcc64e to 11ddcb9 on April 18, 2023 18:50
@h3ndrk (Member, Author) commented on Apr 19, 2023

@schmidma Tested successfully on a NAO with legacy HULA.

@h3ndrk added this pull request to the merge queue on Apr 20, 2023
Merged via the queue into HULKs:main with commit 3b28dcd on Apr 20, 2023
@h3ndrk deleted the double-buffered-hula-reader branch on April 20, 2023 09:12