Support for torch.float16
#255
Hi! When using `torch.float16` tensors, I get an error. Is this expected behavior, i.e. is `float16` not supported? Please excuse me if this is mentioned somewhere in the documentation, but a quick search did not reveal anything. Searching issues and discussions for 'float16' also did not yield anything.
Replies: 1 comment 1 reply
Hi @maxfrei750,

Half precision floats are not really supported in Mitsuba 3 and Dr.Jit at this point. You will need to use `float32` on the PyTorch side, or perform the conversion yourself before passing the tensor to `dr.wrap_ad()`.

In the future we could look into adding support for `float16` types in Dr.Jit, but this isn't currently on our roadmap.
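A minimal sketch of that workaround, assuming a CUDA-capable setup with PyTorch and Dr.Jit installed (the function `scale` and its body are hypothetical examples, not part of either library):

```python
import torch
import drjit as dr

@dr.wrap_ad(source='torch', target='drjit')
def scale(x):
    # Hypothetical computation on the Dr.Jit side;
    # x arrives here as a float32 Dr.Jit tensor.
    return x * 2.0

x_half = torch.randn(4, device='cuda', dtype=torch.float16, requires_grad=True)

# Cast to float32 on the PyTorch side before crossing into Dr.Jit.
# The cast itself is differentiable, so gradients still reach x_half.
y = scale(x_half.to(torch.float32))
y.sum().backward()
print(x_half.grad)  # gradients arrive back in float16
```

Since `.to(torch.float32)` is tracked by PyTorch's autograd, the round trip through the cast preserves gradient flow back to the original half-precision tensor.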