
add (N,C,*) input support for GroupNorm #34773

Merged · 3 commits · Aug 20, 2021

Conversation

zoooo0820
Contributor

PR types

Bug fixes

PR changes

OPs

Describe

nn.GroupNorm currently supports only 4-D input, but there is no shape check on the input tensor, which leads to two kinds of problems:

  1. When the input has fewer than 4 dimensions, GroupNorm may raise errors, and downstream Paddle operations can also crash.
  2. When the input has more than 4 dimensions, the output is incorrect.

What this PR does:

  1. Add (N, C, *) input support (any input with rank >= 2); see the usage sketch below.
  2. Update the API description in the Python file.
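
A minimal usage sketch of what this change enables (shapes and values are illustrative only; assumes a Paddle build that includes this PR):

```python
import paddle

# num_groups must divide num_channels; 4 groups of 2 channels here.
gn = paddle.nn.GroupNorm(num_groups=4, num_channels=8)

x2d = paddle.rand([2, 8])            # (N, C): previously could raise errors
x3d = paddle.rand([2, 8, 16])        # (N, C, L): e.g. sequence features
x5d = paddle.rand([2, 8, 4, 4, 4])   # (N, C, D, H, W): previously gave wrong output

for x in (x2d, x3d, x5d):
    y = gn(x)
    assert y.shape == x.shape  # GroupNorm preserves the input shape
```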

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

@@ -338,8 +338,8 @@ class GroupNorm(Layer):
name(str, optional): Name for the GroupNorm, default is None. For more information, please refer to :ref:`api_guide_Name`.

Shape:
- x: 4-D tensor with shape: (batch, num_features, height, weight).
- output: 4-D tensor with same shape as input x.
+ x: Tensor with shape: (N, C, *), where N is batch_size, C is num_features.
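
For context on the documented semantics: per sample, each of the `num_groups` channel groups is normalized over all of its elements. A hypothetical NumPy reference sketch (without the affine scale/shift; not the operator's actual C++ kernel):

```python
import numpy as np

def group_norm_ref(x, num_groups, eps=1e-5):
    """Reference group norm for an (N, C, *) array, affine params omitted."""
    n = x.shape[0]
    # Flatten channels-per-group together with all trailing dims.
    xr = x.reshape(n, num_groups, -1)        # (N, G, (C/G) * prod(*))
    mean = xr.mean(axis=2, keepdims=True)
    var = xr.var(axis=2, keepdims=True)
    return ((xr - mean) / np.sqrt(var + eps)).reshape(x.shape)
```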
Contributor

This needs to be explained more clearly, e.g. the requirements on tensor size; different layouts should also correspond to different shapes. The Chinese documentation under fluiddoc needs to be updated as well.

Contributor Author

Currently this API only supports NCHW-layout input. Following similar APIs, the Chinese documentation has been updated.

fluid doc preview: [screenshot: gn]

Labels: None yet
Projects: None yet
3 participants