Commit

update the comments.
lcy-seso committed Jan 11, 2018
1 parent e3e9a40 commit 49c20a0
Showing 4 changed files with 48 additions and 29 deletions.
36 changes: 26 additions & 10 deletions paddle/operators/reorder_lod_tensor_by_rank_op.cc
@@ -31,24 +31,40 @@ class ReorderLoDTensorByRankTableOpProtoMaker
"Input(RankTable).");
AddInput("RankTable",
"(LoDRankTable), the rank table according to which Input(X) is "
"ordered.");
"reordered.");
AddOutput("Out", "(LoDTensor), the reordered lod tensor.");
AddComment(R"DOC(ReorderLoDTensorByRankTable
AddComment(R"DOC(ReorderLoDTensorByRankTable operator.
Reorder the Input(X) according to the information provided by Input(RankTable).
For example, If the indices stored in the Input(RankTable) is [3, 0, 2, 1], the
Input(X) is a batch of sequences. Input(RankTable) stores new orders of the
input sequence batch. The reorder_lod_tensor_by_rank operator reorders the
Input(X) according to the information provided by Input(RankTable).
For example:
If the indices stored in the Input(RankTable) is [3, 0, 2, 1], the
Input(X) will be reordered that the forth sequence in Input(X) will become the
first one, and then followed by the originally first, third, and the second one.
NOTE: This operator sort Input(X) according to a given LoDRankTable which dose
This is:
X = [Seq0, Seq1, Seq2, Seq3]. The indices in RankTable are [3, 0, 2, 1].
Out = [Seq3, Seq0, Seq2, Seq1] with a new LoD information.
If the LoD information of Input(X) is empty, this means Input(X) is not a
sequcence. This is also identical to a batch of sequences, each sequence in
which has a fixed length 1. In this case, the reorder_lod_tensor_by_rank operator
reorders each slice of Input(X) along the first axis according to
Input(RankTable).
This is:
X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty. The
indices in RankTable are [3, 0, 2, 1].
Out = [Slice3, Slice0, Slice2, Slice1] with no LoD information is appended.
NOTE: This operator sorts Input(X) according to a given LoDRankTable which dose
not need to be calculated according to Input(X). It can be calculated according
to any other different sequence, and then this operator sort Input(X) according
to other different sequence, and then this operator sorts Input(X) according
to the given LoDRankTable.
For example:
The X = [Seq0, Seq1, Seq2, Seq3]. The indices of RankTable are [3, 0, 2, 1].
The Out = [Seq3, Seq0, Seq2, Seq1] with new LoD information.
)DOC");
}
};
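To make the reordering described in the DOC string concrete, here is a minimal NumPy sketch of the semantics only, not the operator's C++ implementation; the LoD offsets and data are invented for illustration:

```python
import numpy as np

# Sequences are row slices of a flat tensor, delimited by LoD offsets:
# Seq0 = x[0:3], Seq1 = x[3:5], Seq2 = x[5:9], Seq3 = x[9:10].
x = np.arange(10, dtype="float32").reshape(10, 1)
lod = [0, 3, 5, 9, 10]
rank_table_indices = [3, 0, 2, 1]

# Gather the sequences in the order given by the rank table and rebuild the LoD.
reordered = [x[lod[i]:lod[i + 1]] for i in rank_table_indices]
out = np.concatenate(reordered, axis=0)          # [Seq3, Seq0, Seq2, Seq1]
new_lod = np.cumsum([0] + [len(s) for s in reordered]).tolist()
print(new_lod)                                   # [0, 1, 4, 8, 10]
```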
17 changes: 9 additions & 8 deletions paddle/operators/shrink_rnn_memory_op.cc
@@ -77,14 +77,15 @@ class ShrinkRNNMemoryOpProtoMaker : public framework::OpProtoAndCheckerMaker {
"shrinked to match the size of the input of the index'th step.");
AddOutput("Out", "(LoDTensor) The shrinked RNN step memory.");
AddComment(R"DOC(
-This operator is used in dynamic RNN, which are able to handle variable-length
-sequences. In dynamic RNN, sequences in a mini-batch are sorted by its length
-first. After sorting, the longest sequence become the first sequence in the
-batch. Because of the multiple lengths, the size of each step input can be
-different, which may lead to a mismatching between the input of
-the current step and the memory generated by the previous one. This
-operator shrinks memory according to the size of the next step input,
-to make sure that they can match each other.
+This operator is used to shrink the output batch of memory defined in dynamic RNN.
+Dynamic RNN is able to handle variable-length sequences, in which the sequences
+in a mini-batch are first sorted by their lengths. After sorting, the longest
+sequence becomes the first one in the sorted batch, followed by the second
+longest, the third longest, and so on. Dynamic RNN then slices the batch input
+timestep by timestep from the sorted input. Once any sequence in the input batch
+reaches its end, the memory defined in dynamic RNN has to shrink its outputs to
+adapt to the input batch size of the next time step.
)DOC");
}
};
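As a sketch of the shrinking behaviour the new comment describes (plain Python with a hypothetical mini-batch, not the operator itself): when lengths are sorted in descending order, the number of still-active sequences, and therefore the number of memory rows to keep, only decreases over time steps.

```python
# Hypothetical sorted sequence lengths of a mini-batch (longest first).
seq_lengths = [5, 4, 4, 2, 1]

for t in range(max(seq_lengths)):
    # Number of sequences that still have a timestep t; the RNN memory carried
    # into this step must be shrunk to exactly this many rows.
    active = sum(1 for length in seq_lengths if length > t)
    print("timestep", t, "batch size:", active)
# Prints batch sizes 5, 4, 3, 3, 1 for timesteps 0..4.
```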
17 changes: 9 additions & 8 deletions python/paddle/v2/fluid/layers/control_flow.py
@@ -687,11 +687,10 @@ def topk(input, k):


def lod_tensor_to_array(x, table):
"""This function performs the operation that converts an LOD_Tensor to
an array.
""" Convert a LOD_TENSOR_ARRAY to an TensorArray.
Args:
-x (Variable|list): The tensor that needs to be converted to an array.
+x (Variable|list): The LoD tensor to be converted to a LoD tensor array.
table (ParamAttr|list): The variable that stores the level of lod
which is ordered by sequence length in
descending order.
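As a usage sketch of the layer documented above (assuming the paddle.v2.fluid API of this period; the data layer's name and shape are invented for illustration):

```python
import paddle.v2.fluid as fluid

# Build a rank table from a LoD input, then split the LoDTensor into a
# LoDTensorArray according to that table.
x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)
```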
@@ -721,10 +720,10 @@ def lod_tensor_to_array(x, table):


def array_to_lod_tensor(x, table):
"""This function converts an LOD_TENSOR_ARRAY to an LODTensor.
"""Convert a LoD_Tensor_Aarry to an LoDTensor.
Args:
-x (Variable|list): The array that needs to be converted to a tensor.
+x (Variable|list): The LoD tensor array to be converted to a tensor.
table (ParamAttr|list): The variable that stores the level of lod
which is ordered by sequence length in
descending order.
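Continuing the sketch shown under lod_tensor_to_array above, the same rank table converts the array back (again assuming the paddle.v2.fluid API of this period):

```python
# Merge the LoDTensorArray back into a LoDTensor with the original order and LoD.
x_back = fluid.layers.array_to_lod_tensor(array, table)
```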
@@ -752,7 +751,8 @@ def array_to_lod_tensor(x, table):


def increment(x, value=1.0, in_place=True):
"""This function performs an operation that increments each value in the
"""
This function performs an operation that increments each value in the
input :math:`x` by an amount: :math:`value` as mentioned in the input
parameter. This operation is performed in-place by default.
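A small usage sketch of increment (assuming the paddle.v2.fluid API of this period; the one-element counter is invented for illustration):

```python
import paddle.v2.fluid as fluid

# Create a one-element counter and add 1.0 to it in place each time the
# program reaches this op.
counter = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
counter = fluid.layers.increment(x=counter, value=1.0, in_place=True)
```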
Expand Down Expand Up @@ -785,8 +785,9 @@ def increment(x, value=1.0, in_place=True):


def array_write(x, i, array=None):
"""This function writes the given input variable to the specifict position
which is indicated by the arrary index to an output LOD_TENSOR_ARRAY. If the
"""
This function writes the given input variable to the specified position
indicating by the arrary index to an output LOD_TENSOR_ARRAY. If the
output LOD_TENSOR_ARRAY is not given(None), a new one will be created and
returned.
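A small usage sketch of array_write (assuming the paddle.v2.fluid API of this period; the tensor shapes are invented). Passing no array lets the layer create the LOD_TENSOR_ARRAY and return it:

```python
import paddle.v2.fluid as fluid

# Write a tensor into position 0 of a newly created LOD_TENSOR_ARRAY,
# then read it back from the same position.
tmp = fluid.layers.fill_constant(shape=[3, 4], dtype='float32', value=1.0)
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
arr = fluid.layers.array_write(tmp, i=i)
back = fluid.layers.array_read(array=arr, i=i)
```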
7 changes: 4 additions & 3 deletions python/paddle/v2/fluid/layers/tensor.py
@@ -146,10 +146,10 @@ def fill_constant(shape, dtype, value, out=None):
"""
**fill_constant**
-This function creates a tensor with the specified *shape* and
-*dtype*, and initializes it with the constant given by *value*.
+This function creates a tensor with the specified `shape` and
+`dtype`, and initializes it with the constant specified by `value`.
-The attribute *stop_gradient* of the created tensor is set to True.
+The attribute `stop_gradient` of the created tensor is set to True.
Args:
shape(tuple|list|None): Shape of the output tensor.
@@ -166,6 +166,7 @@ def fill_constant(shape, dtype, value, out=None):
data = fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')
"""

helper = LayerHelper("fill_constant", **locals())
if out is None:
out = helper.create_tmp_variable(dtype=dtype)
