Fix coverity issues 10/23 #5083
Conversation
@@ -351,7 +351,7 @@ void CopyDlTensorBatchGpu(TensorList<GPUBackend> &output, std::vector<DLMTensorP
   }
   int element_size, ndim;
   ValidateBatch(element_size, ndim, dl_tensors, batch_size);
-  SmallVector<strided_copy::StridedCopyDesc, 128> sample_descs;
+  SmallVector<strided_copy::StridedCopyDesc, 32> sample_descs;
Stack allocation was too large
Well, it worked, so I wouldn't say with any certainty that it was "too large". For our target it seems to have been OK... Your call.
Strided copy tests have had some random test failures in the CI for a while. Maybe there are still some issues there.
If we ran out of stack, it wouldn't be "some issues". It would be a hard fault (SIGSEGV? I don't think Linux has a separate stack-overflow error).
@@ -181,7 +181,7 @@ DALI_DEVICE DALI_FORCEINLINE void AlignedCopy(const StridedCopyDesc &sample,
                                              MismatchedNdimT mismatched_ndim) {
   using T = typename ElementTypeDesc::type;
   using VecT = typename ElementTypeDesc::vec_type;
-  constexpr int vec_len = ElementTypeDesc::vec_len;
+  constexpr int64_t vec_len = ElementTypeDesc::vec_len;
It's later used in expressions that could overflow with 32-bit types.
indent
try {
  auto interpreter_lock = py::gil_scoped_acquire();
} catch (...) {
}
This code is broken now. This "lock" is like a lock_guard. It goes out of scope and the lock is released; then we do everything outside of the lock.
Also, I don't think that letting this error just disappear is a good idea.
Wouldn't this clear up anyway, since this is in ~DLTensorPythonFunctionImpl?
I considered suggesting that this should kill the whole thing, but I am not sure that it can occur.
Yeah, this should only throw in some extreme cases. If we cannot acquire the GIL, there's probably something more seriously wrong with the process, so maybe it's better to just let it terminate.
dali/python/backend_impl.cc
Outdated
try {
  py::gil_scoped_release interpreter_unlock{};
} catch (...) {
}
That's trading a nice error for a deadlock. Please revert.
Broken GIL handling in ~DLTensorPythonFunctionImpl
!build
CI MESSAGE: [10135281]: BUILD STARTED
CI MESSAGE: [10135281]: BUILD PASSED
Signed-off-by: Rafal Banas <rbanas@nvidia.com>
force-pushed from d05e343 to c9a4387
Signed-off-by: Rafal Banas <rbanas@nvidia.com>
Category:
Other
Description:
Fixes issues detected by Coverity static analysis.
Additional information:
Affected modules and functionalities:
Key points relevant for the review:
Tests:
Checklist
Documentation
DALI team only
Requirements
REQ IDs: N/A
JIRA TASK: N/A