
【Hackathon 4 No.17】Add cummax / cummin API to Paddle #53546

Merged

merged 31 commits, Jun 13, 2023

Conversation

@Patrick-Star125 (Contributor) commented May 6, 2023

PR types

New features

PR changes

APIs

Description

Add cummax / cummin API to Paddle

Link

RFC PR: PaddlePaddle/community#401

To be completed
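As a reference for the semantics being added, here is a minimal pure-Python sketch of 1-D cummax (an illustration only, not Paddle's implementation; ties are assumed to keep the earlier index):

```python
def cummax(x):
    """Cumulative max of a 1-D sequence, returning (out, indices).

    out[i] is max(x[:i+1]); indices[i] is the position where that
    maximum occurred. Ties keep the earlier index here (an assumption;
    frameworks may differ on tie-breaking).
    """
    out, indices = [], []
    best_val, best_idx = x[0], 0
    for i, v in enumerate(x):
        if v > best_val:  # strict '>' keeps the earlier index on ties
            best_val, best_idx = v, i
        out.append(best_val)
        indices.append(best_idx)
    return out, indices

out, indices = cummax([1, 3, 2, 5, 4])
# out == [1, 3, 3, 5, 5], indices == [0, 1, 1, 3, 3]
```

cummin is symmetric, with `<` in place of `>`.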

@paddle-bot (bot) commented May 6, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@paddle-ci-bot (bot) commented May 17, 2023

Sorry to inform you that fe0e522's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

@Patrick-Star125 (Contributor, Author)

The PR-CI-Windows-OPENBLAS CI failure shows no obvious error information. What is the likely cause?

@luotao1 (Contributor) commented May 30, 2023

[screenshot of the CI error log]

> The PR-CI-Windows-OPENBLAS CI failure shows no obvious error information

@Patrick-Star125 (Contributor, Author)

Changes are done; please review. Thank you! @luotao1

luotao1
luotao1 previously approved these changes Jun 5, 2023
GGBond8488
GGBond8488 previously approved these changes Jun 5, 2023
@GGBond8488 (Contributor) left a comment

LGTM


- backward_op : cummin_grad
forward : cummin(Tensor x, int axis=-1, int dtype=3) -> Tensor(out), Tensor(indices)
args : (Tensor x, Tensor indices, Tensor out_grad, int axis, int dtype)
Contributor

Same as above; the kernel as well.

outputs={'out': out, 'indices': indices},
attrs={'axis': axis, 'dtype': dtype},
)
return out, indices
Contributor

Don't return two values here; refer to argmax/argmin and keep the return consistent with theirs.

@Patrick-Star125 (Contributor, Author) Jun 5, 2023

argmax/argmin are single-return functions; I mainly followed kthvalue and mode, which return (out, indices).

Do you mean that indices is only used internally on the C++ side and does not need to be returned on the Python side?

Contributor

Two return values are indeed needed, so this is fine as is; my earlier description was wrong.

@@ -453,6 +453,26 @@
func : cross_grad
data_type : out_grad

- backward_op : cummax_grad
forward : cummax(Tensor x, int axis=-1, int dtype=3) -> Tensor(out), Tensor(indices)
args : (Tensor x, Tensor indices, Tensor out_grad, int axis, int dtype)
Contributor

This is fine. Can x be optimized away here, though? It seems only the input's dimensions are used.

Contributor Author

[screenshot of the error message]

After deleting it, test_check_grad reports a parameter-count mismatch, and I could not resolve this by removing the test_check_grad parameter.

Is deleting x necessary? Since x is passed by reference here, it does not seem to cause extra memory usage.

Contributor

It looks like the kernel was not updated accordingly? Functionally it is not necessary; the point is that if x can be optimized away, the backward computation holds one fewer reference to x, so x can potentially be freed.
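For context on the point above: the cummax backward only scatters out_grad back to the positions recorded in indices, so apart from allocating a buffer of x's shape, x itself is never read. A 1-D sketch (hypothetical helper, not the Paddle kernel):

```python
def cummax_grad(indices, out_grad, x_len):
    """Scatter-add each out_grad element to the input position
    that produced the corresponding cumulative maximum.

    x's values are never read; only its length (shape) is needed,
    which is why the grad op could in principle drop the x tensor.
    """
    x_grad = [0.0] * x_len
    for i, idx in enumerate(indices):
        x_grad[idx] += out_grad[i]
    return x_grad

grad = cummax_grad([0, 1, 1, 3, 3], [1.0, 1.0, 1.0, 1.0, 1.0], 5)
# positions 0, 1, 3 accumulate gradient; positions 2 and 4 get none
```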

int idx = 0;
for (const auto i : build_range(x_dim_size)) {
if (axis == -2) {
std::cout << "stop point 1" << std::endl;
Contributor

Use VLOG if you need logging.

Contributor Author

Done

while (!finished) {
T1 max = *reinterpret_cast<const T1*>(x_data);
int idx = 0;
for (const auto i : build_range(x_dim_size)) {
Contributor

Is build_range necessary here? A plain for (i = 0; i < x_dim_size; i++) seems able to do the same thing, and it avoids the overhead of building a vector.

Contributor Author

Done

indices_data[i * indices_stride] = idx;
}
if (ndims == 1) break;
for (const auto dim_i : build_range(ndims)) {
Contributor

Same as above.

Contributor Author

Done

if (ndims == 1) break;
for (const auto dim_i : build_range(ndims)) {
if (axis == -2) {
std::cout << "stop point 2" << std::endl;
Contributor

Same as above.

Contributor Author

Done

return cummax, indices


class TestCummaxOp(OpTest):
Contributor

Put the unit test file directly under /test.

@Patrick-Star125 (Contributor, Author) Jun 5, 2023

The coverage pipeline shows that the code barely ran. I am not sure whether this is caused by putting the test code directly under /test, or whether some other configuration needs adjusting.

Contributor

@tianshuo78520a could you take a look?

Contributor

@tianshuo78520a @zhangbo9674 please check: should all API unit tests go under the test/legacy_test directory?

Contributor

@Patrick-Star125 After discussion: put the unit tests under the test/legacy_test directory for now; we will make a unified adjustment later.

@Patrick-Star125 Patrick-Star125 dismissed stale reviews from GGBond8488 and luotao1 via 401a944 June 5, 2023 15:50
@GGBond8488 (Contributor) left a comment

LGTM

name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Returns:
out (tuple), Return the result of cummin operation and the corresponding index results. The dtype of cummin result is same with input x.
@jeff41404 (Contributor) Jun 9, 2023

Write multiple return values on multiple lines, one per line.

Contributor Author

Done

name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Returns:
out (tuple), Return the result of cummax operation and the corresponding index results. The dtype of cummax result is same with input x.
Contributor

Write multiple return values on multiple lines, one per line.

Contributor Author

Done

jeff41404
jeff41404 previously approved these changes Jun 12, 2023
@jeff41404 (Contributor) left a comment

LGTM

Comment on lines 3388 to 3389
out (Tensor), The result of cummax operation. The dtype of cummax result is same with input x.
indices (Tensor), The corresponding index results of cummax operation.
Contributor

[screenshot of the rendered documentation]

The line break did not take effect.

Suggested change
out (Tensor), The result of cummax operation. The dtype of cummax result is same with input x.
indices (Tensor), The corresponding index results of cummax operation.
out (Tensor), The result of cummax operation. The dtype of cummax result is same with input x.
indices (Tensor), The corresponding index results of cummax operation.

Contributor Author

Done

Comment on lines 3462 to 3463
out (Tensor), The result of cummin operation. The dtype of cummin result is same with input x.
indices (Tensor), The corresponding index results of cummin operation.
Contributor

Suggested change
out (Tensor), The result of cummin operation. The dtype of cummin result is same with input x.
indices (Tensor), The corresponding index results of cummin operation.
out (Tensor), The result of cummin operation. The dtype of cummin result is same with input x.
indices (Tensor), The corresponding index results of cummin operation.

Contributor Author

Done

@sunzhongkai588 (Contributor) left a comment

LGTM for docs

@luotao1 luotao1 merged commit 3a3fb1f into PaddlePaddle:develop Jun 13, 2023
@Patrick-Star125 Patrick-Star125 deleted the cummax branch June 18, 2023 07:36