
Support for torch.float16 #255

Answered by Speierers
maxfrei750 asked this question in Q&A

Hi @maxfrei750,

Half-precision floats are not supported in Mitsuba 3 or Dr.Jit at this point. You will need to use float32 on the PyTorch side, or perform the conversion yourself before passing the tensor to dr.wrap_ad().
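A minimal sketch of what that conversion could look like (the wrapped function below is just a placeholder for whatever computation you run through dr.wrap_ad(), not an actual rendering call):

```python
import drjit as dr
import torch

# Placeholder for the actual Dr.Jit / Mitsuba computation; dr.wrap_ad()
# converts the incoming PyTorch tensor to a Dr.Jit array and propagates
# gradients back across the boundary.
@dr.wrap_ad(source='torch', target='drjit')
def eval_in_drjit(x):
    return x * x

# Half-precision tensor coming from the PyTorch side (e.g. a network in fp16).
x_half = torch.randn(16, dtype=torch.float16, requires_grad=True)

# Cast to float32 before crossing the boundary; the cast itself is
# differentiable in PyTorch, so gradients still reach x_half.
y = eval_in_drjit(x_half.float())
y.sum().backward()
```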

In the future we could look into adding support for float16 types in Dr.Jit, but this isn't currently on our roadmap.
