
socket_vmnet gets stuck randomly #39

Open
sheelchand opened this issue Oct 17, 2023 · 6 comments
Labels: bug (Something isn't working)

Comments

@sheelchand

We have 7 QEMU VMs running, each with 3 virtual Ethernet interfaces.

socket_vmnet works most of the time, but it randomly stops working and communication between the VMs stops.

The debug logs show the process getting stuck on a writev() call:
DEBUG| [Socket-to-Socket i=1815762] Sending from socket 8 to socket 5: 4 + 95 bytes

There is no log after the line above.
After the VM is rebooted, the logs show the writev() call returning -1.

I suspect this is due to a race condition when multiple threads access the socket to send and receive data. I don't have an exact explanation yet, but the behavior points to a race condition.

@sheelchand
Author

More information on the issue:

  1. I mapped each VM to a socket ID and saw that a thread was stuck writing to socket 11. I stopped the VM corresponding to socket 11, and socket_vmnet crashed out.
    The process was stuck sending to socket 11:
    DEBUG| [Socket-to-Socket i=166220] Sending from socket 7 to socket 11: 4 + 46 bytes

Logs after the VM corresponding to socket 11 is stopped:
DEBUG| [Socket-to-Socket i=166220] Sent from socket 7 to socket 11: -1 bytes (including uint32be header)
writev: Broken pipe
DEBUG| [Socket-to-VMNET i=166221] Receiving from the socket 7
DEBUG| [Socket-to-VMNET i=166221] Received from the socket 7: 46 bytes
DEBUG| [Socket-to-VMNET i=166221] Sending to VMNET: 46 bytes
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Broken pipe
DEBUG| [Socket-to-Socket i=9903992] Sent from socket 5 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=9903993] Receiving from the socket 5
DEBUG| [Socket-to-Socket i=53677] Sent from socket 10 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=53678] Receiving from the socket 10
DEBUG| [Socket-to-Socket i=4171484] Sent from socket 9 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=4171485] Receiving from the socket 9
DEBUG| [Socket-to-VMNET i=9903993] Received from the socket 5: 1514 bytes
DEBUG| [Socket-to-VMNET i=9903993] Sending to VMNET: 1514 bytes
DEBUG| [Socket-to-Socket i=5600510] Sent from socket 11 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=5600511] Receiving from the socket 11
DEBUG| [Socket-to-Socket i=93534327] Sent from socket 6 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=93534328] Receiving from the socket 6
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Bad file descriptor
DEBUG| [Socket-to-Socket i=68470895] Sent from socket 8 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=68470896] Receiving from the socket 8
DEBUG| [Handler i=0] Sent to the socket: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=53678] Received from the socket 10: 46 bytes
DEBUG| [Socket-to-VMNET i=53678] Sending to VMNET: 46 bytes
DEBUG| [Socket-to-VMNET i=5600511] Received from the socket 11: 78 bytes
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Bad file descriptor
DEBUG| [Socket-to-VMNET i=93534328] Received from the socket 6: 70 bytes
DEBUG| [Socket-to-VMNET i=93534328] Sending to VMNET: 70 bytes
DEBUG| shutting down with rc=1
DEBUG| [Socket-to-VMNET i=4171485] Received from the socket 9: 124 bytes
DEBUG| [Socket-to-VMNET i=4171485] Sending to VMNET: 124 bytes
DEBUG| [Socket-to-VMNET i=5600511] Sending to VMNET: 78 bytes

@sheelchand
Author

On further analysis, it looks like a deadlock problem:
Socket 7's buffer is full, so other sockets (including socket 11) are blocked sending to it. Socket 7 in turn is blocked sending to socket 11 because socket 11's buffer is also full, so sockets 7 and 11 deadlock with each other.
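
A minimal sketch of that cycle, with hypothetical names and a plain blocking write() standing in for the real forwarding code:

```c
#include <stddef.h>
#include <unistd.h>

/* Illustration only, not socket_vmnet's actual code. Each connection
 * has one forwarding thread doing blocking writes; once the
 * destination's kernel buffer fills up, the thread parks inside
 * write() and stops servicing its own connection, so the backlog
 * behind it can never drain either. */

/* Thread serving socket 7: parked writing to socket 11. */
void forward_from_7(int sock11_fd, const char *pkt, size_t len) {
    (void)write(sock11_fd, pkt, len); /* blocks: 11's buffer is full */
}

/* Thread serving socket 11: parked writing to socket 7. */
void forward_from_11(int sock7_fd, const char *pkt, size_t len) {
    (void)write(sock7_fd, pkt, len);  /* blocks: 7's buffer is full */
}
```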

@sheelchand
Author

This code has multiple threads writing to the same socket at the same time. This will cause problems, potentially corrupting packets. We need to remove the flooding by mapping MAC addresses to socket IDs, so that two threads don't write to a given socket at the same time.
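
A rough sketch of such a MAC-learning table (all names, the fixed table size, and the locking here are illustrative assumptions, not socket_vmnet's implementation):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAC_TABLE_SIZE 64
#define ETH_ALEN 6

struct mac_entry {
    bool used;
    uint8_t mac[ETH_ALEN];
    int socket_fd;
};

static struct mac_entry mac_table[MAC_TABLE_SIZE];
static pthread_mutex_t mac_table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Learn the source MAC of every frame arriving on socket_fd. */
void mac_learn(const uint8_t *src_mac, int socket_fd) {
    pthread_mutex_lock(&mac_table_lock);
    for (int i = 0; i < MAC_TABLE_SIZE; i++) {
        if (!mac_table[i].used ||
            memcmp(mac_table[i].mac, src_mac, ETH_ALEN) == 0) {
            mac_table[i].used = true;
            memcpy(mac_table[i].mac, src_mac, ETH_ALEN);
            mac_table[i].socket_fd = socket_fd;
            break;
        }
    }
    pthread_mutex_unlock(&mac_table_lock);
}

/* Return the socket for a destination MAC, or -1 to fall back to
 * flooding (broadcast/multicast or not-yet-learned unicast). */
int mac_lookup(const uint8_t *dst_mac) {
    int fd = -1;
    pthread_mutex_lock(&mac_table_lock);
    for (int i = 0; i < MAC_TABLE_SIZE; i++) {
        if (mac_table[i].used &&
            memcmp(mac_table[i].mac, dst_mac, ETH_ALEN) == 0) {
            fd = mac_table[i].socket_fd;
            break;
        }
    }
    pthread_mutex_unlock(&mac_table_lock);
    return fd;
}
```

Broadcast/multicast frames and unicast frames whose destination has not been learned yet would still have to be flooded, so this reduces, but does not eliminate, concurrent writers to a single socket.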

@AkihiroSuda added the bug (Something isn't working) label Oct 24, 2023
@AkihiroSuda
Member

@sheelchand Thanks for the analysis, would you be interested in submitting a PR?

@sheelchand
Author

I tried, but the MACs in the packets don't match the VMs' MACs. I tried putting a semaphore before sending to a socket, but that did not help either. I will continue to look at it.

nirs added a commit to nirs/socket_vmnet that referenced this issue Sep 14, 2024
There are 2 race conditions when writing to connections:
- Since we don't hold the semaphore during iteration, a connection can
  be removed by another thread while we try to use it, which will lead
  to use after free.
- Multiple threads may try to write to the same connection socket,
  corrupting the packets (lima-vm#39).

Both issues are fixed by holding the semaphore during iteration and
while writing to the socket. This is not the most efficient way, but
socket_vmnet crashes daily and we must stop the bleeding first. We can
add more fine-grained locking later.
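
A minimal sketch of the approach that commit describes, with a pthread mutex standing in for the semaphore (the names are illustrative, not the actual patch):

```c
#include <pthread.h>
#include <stddef.h>
#include <sys/uio.h>

struct conn {
    int fd;
    struct conn *next;
};

static struct conn *conns;
static pthread_mutex_t conns_lock = PTHREAD_MUTEX_INITIALIZER;

/* Forward one framed packet to every connection except the sender.
 * Holding conns_lock across both the iteration and the writev() calls
 * prevents a connection from being freed while in use and keeps two
 * threads from interleaving writes on the same socket. */
void broadcast(int src_fd, struct iovec *iov, int iovcnt) {
    pthread_mutex_lock(&conns_lock);
    for (struct conn *c = conns; c != NULL; c = c->next) {
        if (c->fd != src_fd)
            writev(c->fd, iov, iovcnt);
    }
    pthread_mutex_unlock(&conns_lock);
}
```

The trade-off the commit message acknowledges: one slow or blocked receiver now stalls every sender behind the global lock.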
@nirs
Contributor

nirs commented Sep 15, 2024

@sheelchand removing flooding is important for performance, but it will not solve the issue of different threads writing to the same socket at the same time.

Example flow when we send each packet only to its destination:

thread1                              thread2          thread3
write packet from conn1 to conn2     -                write packet from conn3 to conn2

writev(2) does not mention anything about thread safety or about what message size can be written atomically, so we should assume that it is unsafe.

send(2) seems safe:

 ...The length of the message is given by length.  If the message is too long
 to pass atomically through the underlying protocol, the error EMSGSIZE is
 returned, and the message is not transmitted.

But using send we would have to copy the header and packet into one buffer to make a single syscall, so writev seems like the better way, especially if we can find a way to send multiple packets per syscall instead of one, without sending all packets to all the guests.
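
For illustration, a sketch of that framing: the 4-byte big-endian length header and the packet go out in a single writev() call with no copying, serialized by a hypothetical per-connection lock (the struct and field names are assumptions):

```c
#include <arpa/inet.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

struct conn {
    int fd;
    pthread_mutex_t write_lock; /* serializes writers to this socket */
};

/* Send one framed packet: uint32be length header + payload in a
 * single syscall, without copying the payload. A real implementation
 * would also have to handle short writes and EINTR. */
ssize_t send_frame(struct conn *c, const void *pkt, uint32_t len) {
    uint32_t header = htonl(len);
    struct iovec iov[2] = {
        { .iov_base = &header,     .iov_len = sizeof(header) },
        { .iov_base = (void *)pkt, .iov_len = len },
    };
    pthread_mutex_lock(&c->write_lock);
    ssize_t n = writev(c->fd, iov, 2);
    pthread_mutex_unlock(&c->write_lock);
    return n;
}
```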

nirs added a commit to nirs/socket_vmnet that referenced this issue Sep 16, 2024
There are 2 race conditions when writing to connections:
- Since we don't hold the semaphore during iteration, a connection can
  be removed by another thread while we try to use it, which will lead
  to use after free.
- Multiple threads may try to write to the same connection socket,
  corrupting the packets (lima-vm#39).

Both issues are fixed by holding the semaphore during iteration and
while writing to the socket. This is not the most efficient way, but
socket_vmnet crashes daily and we must stop the bleeding first. We can
add more fine-grained locking later.