
Optical flow camera #3938

Merged: 3 commits merged into microsoft:master on Nov 17, 2021

Conversation

@saihv (Contributor) commented Aug 8, 2021

About

Adds an optical flow camera that computes pixel velocity through a material/HLSL code. Inspired by carla-simulator/carla#4104, and the discussion at https://github.com/EpicGames/UnrealEngine/pull/6933 (thanks to @finger563 for the discussion and useful code!)

This PR does not make use of UE's native velocity buffer due to issues in its calculation of motion vectors (see https://github.com/EpicGames/UnrealEngine/pull/6933), and hence can only compute optical flow from camera motion, not from dynamic objects. This is meant as an intermediate step while UE fixes the issues with the velocity buffer (more discussion at https://github.com/florian-world/VelocityUE4).

The flow data is exposed in two ways, through two new 'cameras':

  1. OpticalFlowCaptureComponent returns a 2-channel image where the channels correspond to vx and vy respectively. Corresponding material: HUDAssets/OpticalFlowMaterial.
  2. OpticalFlowVisCaptureComponent is the other camera which converts the flow data to RGB for a more 'visual' output. Corresponding material: HUDAssets/OpticalFlowRGBMaterial.

How Has This Been Tested?

The two new cameras can be targeted through their image type IDs, 8 and 9, and can be visualized either through subwindows or through the normal Python image request API.
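A minimal request sketch, assuming the AirSim Python client, a camera named "0", and that the IDs 8 and 9 above are exposed as ImageType.OpticalFlow and ImageType.OpticalFlowVis (the exact enum names are an assumption here):

```python
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

responses = client.simGetImages([
    # raw 2-channel flow (vx, vy), requested as float pixels
    airsim.ImageRequest("0", airsim.ImageType.OpticalFlow, pixels_as_float=True, compress=False),
    # RGB visualization of the flow, returned as a regular image
    airsim.ImageRequest("0", airsim.ImageType.OpticalFlowVis, pixels_as_float=False, compress=True),
])

flow_resp, vis_resp = responses
print(flow_resp.width, flow_resp.height, len(flow_resp.image_data_float))
```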

Pending accuracy tests of per-pixel flow.

Screenshots (if appropriate):

opticalflow.mp4

@saihv marked this pull request as draft August 8, 2021 02:33
@saihv marked this pull request as ready for review November 9, 2021 17:42
@jonyMarino (Collaborator)

@saihv What is the difference between OpticalFlow and OpticalFlowVis?

@saihv (Contributor, Author) commented Nov 16, 2021

@jonyMarino

OpticalFlow is a 2-channel image where the channels correspond to vx and vy respectively (camera optical flow velocity); this is the raw optical flow data. Corresponding material: HUDAssets/OpticalFlowMaterial.
OpticalFlowVis is a modified version that converts the flow data to RGB for a more 'visual', easier-to-interpret output. Corresponding material: HUDAssets/OpticalFlowRGBMaterial.
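A hedged decoding sketch for the raw OpticalFlow response, assuming it is requested with pixels_as_float=True (as in the earlier request sketch) and that image_data_float packs vx and vy interleaved per pixel; the packing is an assumption, not confirmed in this thread:

```python
import numpy as np

# 'flow_resp' is a response from simGetImages with pixels_as_float=True;
# the 2-values-per-pixel layout assumed here is not confirmed in this thread.
flow = np.array(flow_resp.image_data_float, dtype=np.float32)
flow = flow.reshape(flow_resp.height, flow_resp.width, 2)
vx, vy = flow[..., 0], flow[..., 1]
```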

recreate OpticalFlowCaptureComponent & OpticalFlowVisCaptureComponent to create version of BP_PIPCamera merged with master
@zimmy87 (Contributor) commented Nov 17, 2021

Tested locally and it works for me; thanks for the contribution @saihv!

@zimmy87 merged commit f29f399 into microsoft:master Nov 17, 2021
@alonfaraj (Contributor)

Hi @saihv,
This is a great feature, thanks!
Maybe it's worth updating the documentation, as well as the ROS wrapper code.

zimmy87 added a commit that referenced this pull request Dec 7, 2021
This adds an extra section to the image type documentation about the new image types added in #3938
@zimmy87 mentioned this pull request Dec 7, 2021
@KaiXin-Chen

Hi,
Thanks for the great functionality. However, there are some things I don't quite understand about it; mind giving me some clues?

  1. If we store optical flow as PNG (using the argument PixelsAsFloat = false), it doesn't make sense to me to save optical flow information as uint8, since optical flow can be negative and should be a float.

  2. If we store it as .pfm (PixelsAsFloat = true), the PFM file only has one channel, which also doesn't make sense to me.

Mind letting me know what I did wrong? Thanks a lot!

@artths commented Apr 21, 2023

(quoting @KaiXin-Chen's question above)

It's not about how it's stored; the response itself seems to have only one channel. This feature is broken.
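A quick diagnostic one could run to see what the response actually contains, assuming the AirSim Python client and the ImageType.OpticalFlow value added in this PR:

```python
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

resp = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.OpticalFlow, pixels_as_float=True, compress=False)
])[0]

# 2.0 would match the vx/vy description above; 1.0 matches the single-channel report.
values_per_pixel = len(resp.image_data_float) / (resp.width * resp.height)
print(resp.width, resp.height, values_per_pixel)
```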
