
【Hackathon 5th No.33】 reshape more dtype and test -part #58764

Merged

merged 3 commits into PaddlePaddle:develop on Nov 10, 2023

Conversation

megemini (Contributor) commented on Nov 7, 2023

PR types

Function optimization

PR changes

APIs

Description

Update the dtype check for reshape in static graph mode, adding support for:

  • 'int8',
  • 'uint8',
  • 'complex64',
  • 'complex128',
  • 'bfloat16',

and add the corresponding unit tests.

Files involved:

  • python/paddle/tensor/manipulation.py: extend the supported data types
  • test/legacy_test/test_reshape_op.py: add unit tests

Related PR: #58323
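
A minimal sketch (not part of the PR diff; it assumes a build with complex64 kernels available) of what the relaxed check allows in static graph mode:

import numpy as np
import paddle

paddle.enable_static()

# Build a small static program that reshapes a complex64 input; before this PR
# the dtype check in paddle.reshape rejected complex64 in static graph mode.
main = paddle.static.Program()
startup = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name='x', shape=[2, 6], dtype='complex64')
    out = paddle.reshape(x, [3, 4])

exe = paddle.static.Executor(paddle.CPUPlace())
x_np = (np.random.rand(2, 6) + 1j * np.random.rand(2, 6)).astype('complex64')
(res,) = exe.run(main, feed={'x': x_np}, fetch_list=[out])
print(res.shape)  # (3, 4)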

paddle-bot added the contributor (External developers) label on Nov 7, 2023
luotao1 changed the title from "[Change] reshape more dtype and test" to "【Hackathon 5th No.33】 reshape more dtype and test -part" on Nov 8, 2023
luotao1 self-assigned this on Nov 8, 2023
luotao1 previously approved these changes on Nov 8, 2023

luotao1 (Contributor) left a comment:


LGTM. Please also update the corresponding Chinese documentation for reshape.

@@ -3734,7 +3734,7 @@ def reshape(x, shape, name=None):

    3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 4, 3, 2] and leaving x's data unchanged. In this case, besides -1, 0 means the actual dimension value is going to be copied from the corresponding dimension of x.

    Args:
-       x (Tensor): An N-D Tensor. The data type is ``float32``, ``float64``, ``int32``, ``int64`` or ``bool``
+       x (Tensor): An N-D Tensor. The data type is ``float16``, ``float32``, ``float64``, ``int16``, ``int32``, ``int64``, ``uint16``, ``int8``, ``uint8``, ``complex64``, ``complex128``, ``bfloat16`` or ``bool``.
A reviewer (Contributor) commented:

There is no official support for uint16; should uint16 be deleted?

megemini (Contributor, Author) replied on Nov 8, 2023:

Paddle uses bfloat16 to hold uint16, so there is no official support for uint16?

In [42]: x = paddle.to_tensor(123, dtype='uint16')

In [43]: x
Out[43]: 
Tensor(shape=[], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
       123.)

In [46]: x = paddle.to_tensor(65535, dtype='uint16')

In [47]: x
Out[47]: 
Tensor(shape=[], dtype=bfloat16, place=Place(cpu), stop_gradient=True,
       65280.)
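
For context (not part of the PR): 65535 has no exact bfloat16 representation, since bfloat16 keeps only the upper 16 bits of a float32. A small NumPy sketch (simple truncation; Paddle's actual conversion may round differently) reproduces the 65280.0 shown above:

import numpy as np

# bfloat16 is the top 16 bits of an IEEE-754 float32, so the low mantissa bits
# of 65535.0 are dropped and the stored value becomes 65280.0.
x = np.array([65535.0], dtype=np.float32)
bits = x.view(np.uint32)
bits &= np.uint32(0xFFFF0000)   # zero the low 16 bits -> bfloat16 precision
print(bits.view(np.float32))    # [65280.]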

@@ -3879,6 +3879,11 @@ def get_attr_shape(list_shape):
'int64',
'bool',
'uint16',
A reviewer (Contributor) commented:

Same issue with uint16.

megemini (Contributor, Author) replied:

@jeff41404

NumPy does not support bfloat16, and NumPy's uint16 corresponds to Paddle's bfloat16, so bfloat16 should be removed from check_variable_and_dtype and uint16 kept:

python/paddle/tensor/manipulation.py

        check_variable_and_dtype(
            x,
            'x',
            [
                'float16',
                'float32',
                'float64',
                'int16',
                'int32',
                'int64',
                'bool',
                'uint16',
                'int8',
                'uint8',
                'complex64',
                'complex128',
            ],
            'reshape',
        )

But remove uint16 and keep bfloat16 in the unit test:

test/legacy_test/test_reshape_op.py

    def _test_static_dtype(self):
        places = [paddle.CPUPlace()] + (
            [paddle.CUDAPlace(0)] if base.core.is_compiled_with_cuda() else []
        )

        dtypes = [
            'float16',
            'float32',
            'float64',
            'int16',
            'int32',
            'int64',
            'int8',
            'uint8',
            'complex64',
            'complex128',
            'bfloat16',
            'bool',
        ]
    ...

right?

A reviewer (Contributor) replied:

Yes, there is a historical issue with the function check_variable_and_dtype, which temporarily uses uint16 instead of bfloat16 in its parameters. We will make unified revisions in the future. So for now, keep uint16 as a parameter of check_variable_and_dtype in python/paddle/tensor/manipulation.py, and use bfloat16 instead of uint16 in the unit test.
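
For readers following the thread, a hedged sketch of this naming mismatch (convert_dtype and its import path are assumptions about current Paddle internals, not part of this PR):

from paddle.base import core
from paddle.base.data_feeder import convert_dtype  # assumed internal helper used by check_variable_and_dtype

# The internal dtype machinery still reports bfloat16 under the legacy 'uint16'
# name, which is why the allow-list in manipulation.py keeps 'uint16' while the
# public docstring and the unit test spell it 'bfloat16'.
print(convert_dtype(core.VarDesc.VarType.BF16))  # expected: 'uint16'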

'int16',
'int32',
'int64',
'uint16',
A reviewer (Contributor) commented:

Same issue with uint16.

luotao1 (Contributor) commented on Nov 8, 2023:

PR-CI-Codestyle-Check needs to pass.

megemini (Contributor, Author) commented on Nov 8, 2023:

> PR-CI-Codestyle-Check needs to pass.

@luotao1 pre-commit passed for me locally ~ Looking at the CI log, it doesn't look like a formatting issue?

megemini changed the title from "【Hackathon 5th No.33】 reshape more dtype and test -part" to "【Hackathon 5th No.33】 reshape more dtype and test" on Nov 8, 2023
jeff41404 (Contributor) left a comment:

LGTM

luotao1 changed the title from "【Hackathon 5th No.33】 reshape more dtype and test" to "【Hackathon 5th No.33】 reshape more dtype and test -part" on Nov 10, 2023
luotao1 merged commit 1de21c5 into PaddlePaddle:develop on Nov 10, 2023
28 checks passed
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request Nov 14, 2023
…#58764)

* [Change] reshape more dtype and test

* [Fix] remove uint16

* [Change] use uint16 instead of bfloat16
SecretXV pushed a commit to SecretXV/Paddle that referenced this pull request Nov 28, 2023
…#58764)

* [Change] reshape more dtype and test

* [Fix] remove uint16

* [Change] use uint16 instead of bfloat16