
Fix ComputePropagateScalesMkldnnPass of MKLDNN #47574

Merged
merged 3 commits into from
Nov 3, 2022

Conversation

yeliang2258
Contributor

PR types

Bug fixes

PR changes

Others

Describe

Fix ComputePropagateScalesMkldnnPass of MKLDNN

@paddle-bot

paddle-bot commented Nov 2, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

Contributor

@zh794390558 left a comment

LGTM

Contributor

@wozna left a comment

It's great that you found it. LGTM

auto new_pair = std::make_pair(pair.first, tmp_tensor);
var_quant_scales->insert(std::make_pair(input_name, new_pair));
const auto scale = PADDLE_GET_CONST(float, op_node->Op()->GetAttr("scale"));
if (std::abs(scale) < 1e-6 && out_iter != var_quant_scales->end()) {

Contributor

Did you find any example where the scale was so small?


Contributor Author

I didn't find many such cases, but when out_iter != var_quant_scales->end() we need to divide by the scale, so this check is added to avoid a crash (division by a near-zero value).

@@ -336,27 +336,46 @@ void ComputePropagateScalesMkldnnPass::ComputeWeightScales(
ComputeLstmWeightScales(graph, scope, "WeightX", "WeightH", var_quant_scales);
}

-void ComputePropagateScalesMkldnnPass::UpdateScaleOpInScale(
+void ComputePropagateScalesMkldnnPass::UpdateScaleOpInOutScales(

Contributor

I think that with the new quantization method, where a linear_quantize is inserted before and a linear_dequantize after each operator, this UpdateScaleOp pass is no longer needed as much.

@jiangjiajun jiangjiajun merged commit 5fc9294 into PaddlePaddle:develop Nov 3, 2022
ZeyuChen pushed a commit that referenced this pull request Nov 3, 2022
* add constant_folding_pass pass for mkldnn int8

* update UpdateScaleOpInOutScales