
dynamic recurrent op forward C++ implementation #4597

Merged: 38 commits merged into PaddlePaddle:develop on Oct 10, 2017

Conversation

@Superjomn (Contributor) commented Oct 4, 2017:

resolves: #4554

namespace operators {

namespace detail {

A reviewer (Contributor) commented:

If you add `namespace detail`, you should put this into the `paddle/operators/detail` path.

@Superjomn (Contributor, Author) replied:

Reference: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor_array.cc#L26

I think `namespace detail` is a special namespace; it is meaningless to put it into a `detail` directory.
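For context, here is a minimal, self-contained sketch of the `namespace detail` convention under discussion (the helper `CountSteps` is hypothetical, not from this PR): internal helpers live in a nested `detail` namespace inside the same translation unit, which is why a separate `detail/` directory adds little.

```cpp
#include <iostream>

namespace operators {
namespace detail {
// Internal helper, tucked into operators::detail to signal that it is
// not part of the public operator interface.
// (Hypothetical function, for illustration only.)
int CountSteps(int sequence_length, int stride) {
  return sequence_length / stride;
}
}  // namespace detail

void Run() { std::cout << detail::CountSteps(8, 2) << " steps\n"; }
}  // namespace operators

int main() { operators::Run(); }
```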


} // namespace detail

class DynamicRecurrentAlgorithmProtoAndCheckerMaker

A reviewer (Member) commented:

Why not name it `DynamicRecurrentOpProtoAndCheckerMaker`?

"names of pre-memories");
AddAttr<std::vector<std::string>>(name.memories, "names of memories");

AddComment("This is a recurrent group operator.");

A reviewer (Member) commented:

What is the difference between a recurrent group operator and a recurrent operator?

@@ -1,6 +1,6 @@
 if(WITH_PYTHON)
   cc_library(paddle_pybind SHARED
     SRCS pybind.cc exception.cc protobuf.cc
-    DEPS pybind python backward proto_desc tensor_array
+    DEPS pybind python backward proto_desc tensor_array dynamic_recurrent_op

A reviewer (Member) commented:

This does not need a special dependency.

namespace paddle {
namespace operators {

using framework::Scope;

A reviewer (Member) commented:

There are two problems:

  1. You may not use a using-directive to make all names from a namespace available.
  2. Do not use namespace aliases at namespace scope in header files.

Ref: https://google.github.io/styleguide/cppguide.html#Namespaces

@Superjomn (Contributor, Author) replied:

Scope is not a namespace, so it should be OK.

The Google style guide only forbids `using namespace xxx`.

@jacquesqiao (Member) commented Oct 10, 2017:

`using` is not recommended in a header file because it may pollute every file that includes it; putting it into the .cpp file is OK.
https://stackoverflow.com/questions/19940081/using-keyword-in-header-implementation-in-c
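A minimal sketch of the pattern the thread converges on (the types below are stand-ins, not the actual PaddlePaddle classes): a using-declaration such as `using framework::Scope;` is fine in a .cc file, where it is scoped to one translation unit, but in a header it would leak into every file that includes it.

```cpp
#include <iostream>

namespace framework {
struct Scope {};  // stand-in for paddle::framework::Scope
}  // namespace framework

// What the header would declare: fully qualified names, no `using`.
namespace operators {
void Run(const framework::Scope& scope);
}  // namespace operators

// What the .cc file would define: the using-declaration below cannot
// pollute files that include the header.
namespace operators {
using framework::Scope;  // a using-declaration, not `using namespace`
void Run(const Scope& scope) { std::cout << "step net run\n"; }
}  // namespace operators

int main() {
  framework::Scope scope;
  operators::Run(scope);
}
```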


// call stepnet in all the time steps
for (size_t step = 0; step < cache_.num_steps; step++) {
stepnet_->Run(scope, dev_ctx);

A reviewer (Member) commented:

`step` is not used?
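A self-contained toy of the point being raised here (`Scope` and `RunStepNet` are stand-ins, not this PR's API): if the loop body never mentions `step`, every iteration repeats identical work, whereas each step presumably needs its own per-step state, along these lines:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Scope { std::string name; };  // stand-in type

void RunStepNet(const Scope& scope) {
  std::cout << "running step net in " << scope.name << "\n";
}

int main() {
  const std::size_t num_steps = 3;
  std::vector<Scope> step_scopes;
  for (std::size_t i = 0; i < num_steps; i++) {
    step_scopes.push_back({"step_scope_" + std::to_string(i)});
  }
  // The loop variable now selects this step's scope, so `step` is used.
  for (std::size_t step = 0; step < num_steps; step++) {
    RunStepNet(step_scopes[step]);
  }
}
```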

// TODO(superjom) check all the inputs has the same LoD
int level = 0;
const auto& inlinks = cache_.inlinks;
for (auto& item : inlinks) {

A reviewer (Member) commented:

for (const auto& item : inlinks) {...}

dy_seq_metas_[item.first] =
ta.Unpack(tensor, level, true /*length_descend*/);

if (cache_.num_steps) {

@jacquesqiao (Member) commented Oct 10, 2017:

if (cache_.num_steps != 0UL) {...}

Use an explicit comparison with 0 to document the type of num_steps.

@Superjomn (Contributor, Author) replied:

This is OK; 0 converting to false is clear.
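For reference, the two spellings being debated are equivalent for an unsigned count; a tiny sketch (with a local `num_steps` standing in for `cache_.num_steps`):

```cpp
#include <cstddef>
#include <iostream>

int main() {
  std::size_t num_steps = 3;
  if (num_steps) {          // implicit: any non-zero value converts to true
    std::cout << "implicit check: have steps\n";
  }
  if (num_steps != 0UL) {   // explicit: the literal documents the type
    std::cout << "explicit check: have steps\n";
  }
}
```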

auto ta_it = step_inputs_.find(item.first);
PADDLE_ENFORCE(ta_it != step_inputs_.end(),
"step_inputs_ not compatible with memory set");
TensorArray& ta = step_inputs_[item.first];

A reviewer (Member) commented:

TensorArray& ta = step_inputs_[item.first];

==>

TensorArray& ta = ta_it->second;
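A self-contained sketch of why this suggestion helps (`TensorArray` is a stand-in type): reusing the iterator returned by `find()` saves a second hash lookup, and it avoids `operator[]`, which would default-insert a missing key.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

struct TensorArray { int size = 0; };  // stand-in type

int main() {
  std::unordered_map<std::string, TensorArray> step_inputs_;
  step_inputs_["x"] = TensorArray{4};

  auto ta_it = step_inputs_.find("x");
  assert(ta_it != step_inputs_.end());

  // One lookup instead of two: operator[] would hash "x" a second time
  // (and would insert a default TensorArray if the key were absent).
  TensorArray& ta = ta_it->second;
  assert(ta.size == 4);
}
```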

@jacquesqiao (Member) left a review comment:

LGTM! There will be a series of PRs to complete the dynamic recurrent op.

@Superjomn merged commit 843ed8e into PaddlePaddle:develop on Oct 10, 2017
@Superjomn deleted the feature/dynamic-recurrent-op branch on Oct 10, 2017 22:08
Successfully merging this pull request may close these issues: variable-length recurrent operator (#4554).