
【Inplace api】Add copy for inplace #54683

Merged — 11 commits, Jun 28, 2023

Conversation

GGBond8488
Contributor

@GGBond8488 GGBond8488 commented Jun 15, 2023

PR types

Others

PR changes

Others

Description

Pcard-72375
Add an input copy for the backward calculation of inplace APIs.

Normal API change

Before:

paddle::Tensor pow_ad_func(const paddle::Tensor& x, paddle::experimental::Scalar y) {
...
 // Get Input AutoGradMeta
 egr::AutogradMeta* x_autograd_meta = egr::EagerUtils::nullable_autograd_meta(x);
...
 // Forward API Call
 auto api_result = paddle::experimental::pow(x, y);
...
...
 // Get Outputs
 auto& out = api_result;

 // Get Output AutoGradMeta
 egr::AutogradMeta* out_autograd_meta = egr::EagerUtils::autograd_meta(&out);
 bool trace_backward = egr::Controller::Instance().HasGrad();
 bool require_any_grad = egr::EagerUtils::ComputeRequireGrad(trace_backward,x_autograd_meta);

 // Check Inplace if needed

 // Node Creation
 if(require_any_grad) {
...
   // Node Construction
   auto grad_node = std::shared_ptr<PowGradNode>(new PowGradNode(1, 1));
   // Set for forward trace
    if (FLAGS_check_nan_inf) {
      grad_node->SetForwardTrace(egr::Controller::Instance().GetPythonStack());
    }
   // SetAttributes if needed
    grad_node->SetAttributey(y);
   // Set TensorWrappers for Forward Inputs if needed
   grad_node->SetTensorWrapperx(x);
   // SetGradOutMeta & SetEdges
   grad_node->SetGradOutMeta(x, 0);
   // SetOutRank & SetHistory & SetGradInMeta
   if (out_autograd_meta) {
     egr::EagerUtils::SetOutRankWithSlot(out_autograd_meta, 0);
   }
   if (out_autograd_meta) {
     egr::EagerUtils::SetHistory(out_autograd_meta, grad_node);
   }
   grad_node->SetGradInMeta(out, 0);
   // Set TensorWrappers for Forward Outputs if needed

 }

After this PR is merged:

paddle::Tensor pow_ad_func(const paddle::Tensor& x, paddle::experimental::Scalar y) {
...
  // Get Input AutoGradMeta
  egr::AutogradMeta* x_autograd_meta = egr::EagerUtils::nullable_autograd_meta(x);

  bool trace_backward = egr::Controller::Instance().HasGrad();
  bool require_any_grad = egr::EagerUtils::ComputeRequireGrad(trace_backward,x_autograd_meta);

  // Node Declaration
  std::shared_ptr<PowGradNode> grad_node;

  // Set grad_node before API Call
  if(require_any_grad) {
    paddle::platform::RecordEvent node_creation_record_event("pow node_creation", paddle::platform::TracerEventType::OperatorInner, 1);
    // Node Construction
    grad_node = std::shared_ptr<PowGradNode>(new PowGradNode(1, 1));
    // Set for forward trace
    if (FLAGS_check_nan_inf) {
      grad_node->SetForwardTrace(egr::Controller::Instance().GetPythonStack());
    }
    // SetAttributes if needed
    grad_node->SetAttributey(y);
    // Set TensorWrappers for Forward Inputs if needed
    grad_node->SetTensorWrapperx(x);
  }

  // Forward API Call
  auto api_result = paddle::experimental::pow(x, y);
...
  // Get Outputs
  auto& out = api_result;

  // Get Output AutoGradMeta
  egr::AutogradMeta* out_autograd_meta = egr::EagerUtils::autograd_meta(&out);
...
  // Set grad_node after API call
  if(require_any_grad) {
    egr::EagerUtils::PassStopGradient(false,out_autograd_meta);
    // SetGradOutMeta & SetEdges
    grad_node->SetGradOutMeta(x, 0);
    // SetOutRank & SetHistory & SetGradInMeta
    if (out_autograd_meta) {
      egr::EagerUtils::SetOutRankWithSlot(out_autograd_meta, 0);
    }
    if (out_autograd_meta) {
      egr::EagerUtils::SetHistory(out_autograd_meta, grad_node);
    }
    grad_node->SetGradInMeta(out, 0);
    // Set TensorWrappers for Forward Outputs if needed
  }

  VLOG(4) << "Finish AD API: pow";
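The reordering above can be sketched in Python. The class and function names below are hypothetical stand-ins, not Paddle's actual API: grad-node construction and input capture now happen before the forward call, while output metadata is wired up after it.

```python
# Minimal sketch (hypothetical names) of the split node-creation flow:
# attributes and input wrappers are recorded BEFORE the forward call,
# output metadata AFTER it.

class PowGradNode:
    """Stand-in for the generated PowGradNode."""
    def __init__(self):
        self.saved_x = None   # tensor wrapper for the forward input
        self.attr_y = None    # recorded attribute
        self.out_meta = None  # grad-in meta, set after the forward call

def pow_ad_func(x, y, require_any_grad=True):
    grad_node = None
    if require_any_grad:
        # Set grad_node before API call: attributes and input wrappers
        grad_node = PowGradNode()
        grad_node.attr_y = y
        grad_node.saved_x = x
    out = x ** y              # forward API call
    if require_any_grad:
        # Set grad_node after API call: output metadata / history
        grad_node.out_meta = ("slot", 0)
    return out, grad_node

out, node = pow_ad_func(2.0, 3)
print(out, node.saved_x)      # 8.0 2.0
```

For a non-inplace op this split changes nothing observable; its purpose is to make the normal and inplace code paths uniform, so the inplace variant can capture the input before the kernel overwrites it.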

Inplace API change

This PR adds an input copy in the inplace pow as follows:

paddle::Tensor pow__ad_func(const paddle::Tensor& x, paddle::experimental::Scalar y) {
...
  // Set grad_node before API Call
  if(require_any_grad) {
    paddle::platform::RecordEvent node_creation_record_event("pow node_creation", paddle::platform::TracerEventType::OperatorInner, 1);
    // Node Construction
    grad_node = std::shared_ptr<PowGradNode>(new PowGradNode(1, 1));
    // Set for forward trace
    if (FLAGS_check_nan_inf) {
      grad_node->SetForwardTrace(egr::Controller::Instance().GetPythonStack());
    }
    // SetAttributes if needed
    grad_node->SetAttributey(y);
    // Set TensorWrappers for Forward Inputs if needed
    auto x_clone = paddle::experimental::assign(x);
    grad_node->SetTensorWrapperx(x_clone);
  }

  // Forward API Call
  auto api_result = paddle::experimental::pow(x, y);
  // Check NaN and Inf if needed

  // Get Outputs
  auto& out = api_result;
...
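The reason the copy is needed can be illustrated with a small NumPy sketch (an analogy under stated assumptions, not Paddle's implementation): the backward of pow, d/dx pow(x, y) = y * x^(y-1), needs the original input x, but the inplace forward overwrites x, so a clone is saved first — analogous to the paddle::experimental::assign(x) call above.

```python
import numpy as np

def pow_inplace_with_copy(x, y):
    """Hypothetical inplace pow: overwrites x, returns (out, grad_fn)."""
    x_clone = x.copy()           # analogous to paddle::experimental::assign(x)
    np.power(x, y, out=x)        # forward kernel, in place: x now holds x**y
    def grad_fn(grad_out):
        # backward uses the saved copy, not the overwritten x
        return grad_out * y * np.power(x_clone, y - 1)
    return x, grad_fn

x = np.array([2.0, 3.0])
out, grad_fn = pow_inplace_with_copy(x, 3)
# out is [8, 27]; gradient w.r.t. the ORIGINAL x is 3 * x**2 = [12, 27]
print(grad_fn(np.ones_like(out)))   # [12. 27.]
```

Without the clone, grad_fn would read the overwritten buffer and compute 3 * (x**3)**2, silently producing wrong gradients.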

@paddle-bot

paddle-bot bot commented Jun 15, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

bool trace_backward = egr::Controller::Instance().HasGrad();
bool require_any_grad = egr::EagerUtils::ComputeRequireGrad({});

// Node Declaration
Contributor

The indentation of the generated code also needs to be aligned.

// Node Declaration
std::shared_ptr<{}> grad_node;

//Set grad_node before API Call
Contributor

A space is needed after the `//` comment marker.

// Check Inplace if needed
{}{}
// Node Creation
//Set grad_node after API call
Contributor

Same as above.

'y must be scalar or tensor type, but received: %s ' % (type([2])),
):
paddle.pow_(var, [2])

Contributor

Shall we check the gradients of the inplace paddle.pow_?

Contributor Author

TestInplacePowerScalar(Tensor) inherits from TestDygraphInplace, which contains the backward test.

@jeff41404
Contributor

In the description, in the "Before" code of pow_ad_func, should auto api_result = paddle::experimental::sparse::pow(x, factor); be auto api_result = paddle::experimental::pow(x, y); ?

@GGBond8488
Contributor Author

> In the description, in the "Before" code of pow_ad_func, should auto api_result = paddle::experimental::sparse::pow(x, factor); be auto api_result = paddle::experimental::pow(x, y); ?

It has been changed to auto api_result = paddle::experimental::pow(x, y);

Contributor

@jeff41404 jeff41404 left a comment

LGTM


def pow_(x, y, name=None):
"""
Inplace version of ``pow`` API, the output Tensor will be inplaced with input ``x``.
Please refer to :ref:`api_tensor_pow`.
Contributor

It seems this label is incorrect, so the reference does not resolve. Try changing it to api_paddle_pow, or add this label to api_label.

@jeff41404 jeff41404 merged commit 98debaa into PaddlePaddle:develop Jun 28, 2023
@GGBond8488 GGBond8488 changed the title 【Inplace】Add copy for inplace 【Inplace api】Add copy for inplace Jul 27, 2023