Fix fetch_op_handle #10454

Merged: 5 commits merged into PaddlePaddle:develop on May 9, 2018
Conversation

chengduoZH (Contributor) commented May 7, 2018

fix #10457

@@ -61,9 +61,14 @@ void FetchOpHandle::RunImpl() {
    auto &scope = scopes[i];
    auto *var =
        scope->FindVar(kLocalExecScopeName)->Get<Scope *>()->FindVar(var_name);
    if (var == nullptr) {
      scope->FindVar(var_name);
Collaborator:
This line has no effect

chengduoZH (author):
Oh, right, scope->FindVar(kLocalExecScopeName)->Get<Scope *>() is a sub-scope of scope.
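For context, this reply relies on a child scope's variable lookup falling back to its parent, which is why the extra scope->FindVar(var_name) call is redundant. A toy sketch of that parent-chain lookup (illustrative only; ToyScope and its layout are made up and are not Paddle's actual Scope implementation):

```cpp
#include <map>
#include <string>

struct Var {};

// Minimal stand-in for a scope with a parent pointer, mirroring how the
// local execution scope relates to `scope` in the hunk above.
struct ToyScope {
  const ToyScope* parent = nullptr;
  std::map<std::string, Var*> vars;

  Var* FindVar(const std::string& name) const {
    auto it = vars.find(name);
    if (it != vars.end()) return it->second;
    // Not found locally: walk up the parent chain, so a lookup in the
    // child scope already covers variables living in the outer scope.
    return parent != nullptr ? parent->FindVar(name) : nullptr;
  }
};
```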

@@ -49,7 +49,7 @@ void FetchOpHandle::RunImpl() {
      platform::DeviceContextPool::Instance().Get(platform::CPUPlace());
  for (auto *input : inputs_) {
    auto *var = static_cast<VarHandle *>(input);
-   var->generated_op_->Wait(cpu_ctx);
+   if (var->generated_op_) var->generated_op_->Wait(cpu_ctx);
Collaborator:
if () {
}

chengduoZH (author):
Done
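For reference, the braced form the reviewer appears to be asking for would look roughly like this fragment (a sketch based on the changed line above, not necessarily the exact committed code):

```cpp
// Skip the wait when the input variable has no generating op.
if (var->generated_op_) {
  var->generated_op_->Wait(cpu_ctx);
}
```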

panyx0718 previously approved these changes May 8, 2018
panyx0718 (Contributor) left a comment:
Nice!

# TODO(zcd): I found that once the memory optimizer is enabled,
# parallel_exe doesn't fetch some variables, such as conv2d_0.b_0@GRAD
# and conv2d_1.b_0@GRAD. Those variables should not be pruned.
# fluid.memory_optimize(main)
Contributor:
Nice!
Recently, qingqing reported that se-resnext accuracy improved after turning memory_optimize off.

chengduoZH (author):
Maybe this shows that memory_optimize has some problems.

fetch_list = []
all_vars = main.global_block().vars
for k, v in all_vars.iteritems():
    if 'velocity' not in k:
Contributor:
Why is velocity special?

chengduoZH (author):
velocity can also be fetched; I just think it's unnecessary to fetch it.

for data in train_inputs:
    ret = pe.run(fetch_list, feed=feeder.feed(data))
    for i in range(len(fetch_list)):
        print("%s - %s" % (fetch_list[i], np.sum(ret[i])))
Contributor:

Let's try to avoid printing a lot in tests.

chengduoZH (author):
Ok, I will fix this.

chengduoZH merged commit 8b1918f into PaddlePaddle:develop on May 9, 2018
Successfully merging this pull request may close these issues:

Fetch_op_handle doesn't fetch parameter gradient.