[rfc] modify cummax / cummin API design #531
Conversation
@@ -300,9 +300,8 @@ gpu:
template <typename T, typename Context>
void CummaxKernel(const Context& dev_ctx,
                  const DenseTensor& x,
                  const Scalar& axis,
                  DataType dtype,
                  bool flatten,
Why was `flatten` removed here? Same question for cummin.
cumsum's `flatten` is used to adjust the shape before the backward op runs, whereas the backward of cummax/cummin does not depend on `flatten`, so it was removed.
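The point that the backward pass needs only the recorded indices (not `flatten`) can be sketched in a 1-D reference implementation; this is an illustrative NumPy mock of the kernel's semantics, not Paddle's actual code, and the function names are hypothetical:

```python
import numpy as np

def cummax_1d(x):
    """Running max and the index (first occurrence) where each
    running max was attained -- the values/indices pair the kernel
    returns."""
    values = np.maximum.accumulate(x)
    indices = np.zeros(len(x), dtype=np.int64)
    best = 0
    for i in range(1, len(x)):
        if x[i] > x[best]:
            best = i
        indices[i] = best
    return values, indices

def cummax_grad_1d(indices, dout):
    """Backward: each output gradient flows to the input element that
    produced the running max.  Only `indices` is consumed -- no shape
    bookkeeping, hence no need for a `flatten` argument here."""
    dx = np.zeros(len(dout), dtype=dout.dtype)
    np.add.at(dx, indices, dout)  # scatter-add: several outputs may map to one input
    return dx
```

For cumsum the backward is itself a (reversed) cumsum, so the kernel must know whether the forward flattened the input in order to restore the shape; here the indices already encode everything the backward needs.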
OK, I see you've moved the flatten operation to the Python layer. Please also fix the Windows CI failure on the code PR.
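Handling `flatten` at the Python layer can be sketched as a thin wrapper that flattens before dispatching to the kernel; this is a minimal NumPy stand-in with hypothetical names, assuming `axis=None` means "operate on the flattened input" as it does for cumsum:

```python
import numpy as np

def py_cummax(x, axis=None):
    # Hypothetical Python-layer wrapper: the flatten case is resolved
    # here instead of inside the C++ kernel, so the kernel signature
    # no longer needs a `flatten` flag.
    if axis is None:
        x = x.reshape(-1)  # flatten, then treat as a 1-D cummax
        axis = 0
    # np.maximum.accumulate stands in for the kernel call.
    return np.maximum.accumulate(x, axis=axis)
```

The design advantage is that the C++ kernel only ever sees a concrete axis, keeping the kernel API minimal.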
LGTM
This PR records the operator's latest design and may be used during operator PR review; further changes are expected, so it should not be merged for now.
Related link: