add python wrapper for l2 normalize layer. #7574

Merged · 3 commits · Jan 18, 2018
5 changes: 5 additions & 0 deletions doc/api/v2/fluid/layers.rst

@@ -499,3 +499,8 @@ swish
 ------
 .. autofunction:: paddle.v2.fluid.layers.swish
     :noindex:
+
+l2_normalize
+------------
+.. autofunction:: paddle.v2.fluid.layers.l2_normalize
+    :noindex:
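For quick reference, a minimal usage sketch of the wrapper this PR adds (the `axis` and `epsilon` parameters are assumptions based on the usual fluid signature, not confirmed by this diff):

```python
import paddle.v2.fluid as fluid

# Each row of `data` is scaled to unit L2 norm along `axis`.
data = fluid.layers.data(name="data", shape=[32], dtype="float32")
# Assumed signature: l2_normalize(x, axis, epsilon=1e-12, name=None)
normed = fluid.layers.l2_normalize(x=data, axis=1)
```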
4 changes: 2 additions & 2 deletions paddle/operators/clip_op.cc

@@ -51,8 +51,8 @@ class ClipOpMaker : public framework::OpProtoAndCheckerMaker {
     AddComment(R"DOC(
 Clip Operator.
 
-The clip operator limits the value of given input within an interval. The interval is
-specified with arguments 'min' and 'max':
+The clip operator limits the value of given input within an interval. The
+interval is specified with arguments 'min' and 'max':
 
 $$
 Out = \min(\max(X, min), max)
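The comment's formula is plain elementwise clipping; a NumPy sketch of the same computation (illustrative only, not the operator's kernel):

```python
import numpy as np

def clip(x, min_val, max_val):
    # Out = min(max(X, min), max), applied elementwise.
    return np.minimum(np.maximum(x, min_val), max_val)

print(clip(np.array([-2.0, 0.5, 3.0]), -1.0, 1.0))  # -> [-1.   0.5  1. ]
```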
32 changes: 16 additions & 16 deletions paddle/operators/elementwise_op.h

@@ -26,9 +26,9 @@ class ElementwiseOp : public framework::OperatorWithKernel {
   using Tensor = framework::Tensor;
   void InferShape(framework::InferShapeContext* ctx) const override {
     PADDLE_ENFORCE(ctx->HasInput("X"),
-                   "Input(X) of elementwise op should not be null");
+                   "Input(X) of elementwise op should not be null.");
     PADDLE_ENFORCE(ctx->HasInput("Y"),
-                   "Input(Y) of elementwise op should not be null");
+                   "Input(Y) of elementwise op should not be null.");
     PADDLE_ENFORCE(ctx->HasOutput("Out"),
                    "Output(Out) of elementwise op should not be null.");

@@ -45,32 +45,31 @@ class ElementwiseOpMaker : public framework::OpProtoAndCheckerMaker {
  public:
   ElementwiseOpMaker(OpProto* proto, OpAttrChecker* op_checker)
       : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddInput("X", "(Tensor) The first input tensor of elementwise op");
-    AddInput("Y", "(Tensor) The second input tensor of elementwise op");
-    AddOutput("Out", "The output of elementwise op");
+    AddInput("X", "(Tensor), The first input tensor of elementwise op.");
+    AddInput("Y", "(Tensor), The second input tensor of elementwise op.");
+    AddOutput("Out", "The output of elementwise op.");
     AddAttr<int>("axis",
-                 "(int, default -1) The starting dimension index "
-                 "for broadcasting Y onto X")
+                 "(int, default -1). The start dimension index "
+                 "for broadcasting Y onto X.")
         .SetDefault(-1)
         .EqualGreaterThan(-1);
     comment_ = R"DOC(
 Limited Elementwise {name} Operator.
 
 The equation is:
 
-.. math::
-    {equation}
+$${equation}$$
 
-X is a tensor of any dimension and the dimensions of tensor Y must be smaller than
-or equal to the dimensions of X.
+$X$ is a tensor of any dimension, and the dimensions of tensor $Y$ must be
+smaller than or equal to the dimensions of $X$.
 
 There are two cases for this operator:
-1. The shape of Y is same with X;
-2. The shape of Y is a subset of X.
+1. The shape of $Y$ is the same as that of $X$;
+2. The shape of $Y$ is a subset of that of $X$.
 
 For case 2:
-Y will be broadcasted to match the shape of X and axis should be
-the starting dimension index for broadcasting Y onto X.
+$Y$ will be broadcast to match the shape of $X$, and axis should be
+set to the index of the start dimension for broadcasting $Y$ onto $X$.
 
 For example
   .. code-block:: python

@@ -81,7 +80,8 @@ For example
     shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
     shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
 
-Either of the inputs X and Y or none can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input X.
+Either of the inputs $X$ and $Y$, or neither, can carry the LoD (Level of Details)
+information. However, the output only shares the LoD information with input $X$.
 
 )DOC";
     AddComment(comment_);
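To make the broadcast rule for case 2 concrete, here is a NumPy sketch of the documented semantics (an illustration, not Paddle's implementation):

```python
import numpy as np

# shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), axis = 1:
# Y's dimensions are aligned with X starting at dimension index 1.
X = np.random.rand(2, 3, 4, 5)
Y = np.random.rand(3, 4)
axis = 1

# Pad Y's shape with singleton dims so NumPy broadcasting matches the rule.
shape = [1] * X.ndim
shape[axis:axis + Y.ndim] = Y.shape   # -> [1, 3, 4, 1]
out = X + Y.reshape(shape)            # elementwise add with Y broadcast onto X
assert out.shape == X.shape
```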
18 changes: 9 additions & 9 deletions paddle/operators/expand_op.cc

@@ -58,21 +58,21 @@ class ExpandOpMaker : public framework::OpProtoAndCheckerMaker {
   ExpandOpMaker(OpProto* proto, OpAttrChecker* op_checker)
       : OpProtoAndCheckerMaker(proto, op_checker) {
     AddInput("X",
-             "(Tensor, default Tensor<float>) A tensor with rank in [1, 6]."
-             "X is the input tensor to be expanded.");
+             "(Tensor, default Tensor<float>). A tensor with rank in [1, 6]. "
+             "X is the input to be expanded.");
     AddOutput("Out",
-              "(Tensor, default Tensor<float>) A tensor with rank in [1, 6]."
-              "The rank of Output(Out) is same as Input(X) except that each "
-              "dimension size of Output(Out) is equal to corresponding "
-              "dimension size of Input(X) multiplying corresponding value of "
-              "Attr(expand_times).");
+              "(Tensor, default Tensor<float>). A tensor with rank in [1, 6]. "
+              "The rank of Output(Out) is the same as that of Input(X). "
+              "After expanding, the size of each dimension of Output(Out) is "
+              "equal to the size of the corresponding dimension of Input(X) "
+              "multiplied by the corresponding value given in "
+              "Attr(expand_times).");
     AddAttr<std::vector<int>>("expand_times",
                               "Expand times number for each dimension.");
     AddComment(R"DOC(
 Expand operator tiles the input by given times number. You should set times
 number for each dimension by providing attribute 'expand_times'. The rank of X
-should be in [1, 6]. Please notice that size of 'expand_times' must be same with
-X's rank. Following is a using case:
+should be in [1, 6]. Please note that the size of 'expand_times' must be the
+same as X's rank. A usage example follows:
 
 Input(X) is a 3-D tensor with shape [2, 3, 1]:
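The documented expand semantics match NumPy's `tile`; a quick sketch (illustrative only, not the operator's kernel):

```python
import numpy as np

X = np.random.rand(2, 3, 1)      # rank-3 input, shape [2, 3, 1]
expand_times = [1, 2, 2]         # one repeat count per dimension

Out = np.tile(X, expand_times)   # each dimension size is multiplied
assert Out.shape == (2, 6, 2)
```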
23 changes: 16 additions & 7 deletions python/paddle/trainer_config_helpers/evaluators.py

@@ -16,13 +16,22 @@
 from default_decorators import *
 
 __all__ = [
-    "evaluator_base", "classification_error_evaluator", "auc_evaluator",
-    "pnpair_evaluator", "precision_recall_evaluator", "ctc_error_evaluator",
-    "chunk_evaluator", "sum_evaluator", "column_sum_evaluator",
-    "value_printer_evaluator", "gradient_printer_evaluator",
-    "maxid_printer_evaluator", "maxframe_printer_evaluator",
-    "seqtext_printer_evaluator", "classification_error_printer_evaluator",
-    "detection_map_evaluator"
+    "evaluator_base",
+    "classification_error_evaluator",
+    "auc_evaluator",
+    "pnpair_evaluator",
+    "precision_recall_evaluator",
+    "ctc_error_evaluator",
+    "chunk_evaluator",
+    "sum_evaluator",
+    "column_sum_evaluator",
+    "value_printer_evaluator",
+    "gradient_printer_evaluator",
+    "maxid_printer_evaluator",
+    "maxframe_printer_evaluator",
+    "seqtext_printer_evaluator",
+    "classification_error_printer_evaluator",
+    "detection_map_evaluator",
 ]
25 changes: 13 additions & 12 deletions python/paddle/v2/fluid/framework.py

@@ -116,8 +116,8 @@ def _debug_string_(proto, throw_on_error=True):
     """
     error_fields = list()
     if not proto.IsInitialized(error_fields) and throw_on_error:
-        raise ValueError("{0} are not initialized\nThe message is {1}".format(
-            error_fields, proto))
+        raise ValueError("{0} are not initialized.\nThe message is {1}:\n".
+                         format(error_fields, proto))
     return proto.__str__()

@@ -374,12 +374,13 @@ def __init__(self,
         >>> outputs={"Out": [var1]})
 
         Args:
-            block(Block): The block has the current operator
-            desc(core.OpDesc): The protobuf description
+            block(Block): The block that contains the current operator.
+            desc(core.OpDesc): The protobuf description.
             type(str): The type of operator.
             inputs(dict): The input dictionary. Key is the input parameter name.
                 Value is a list of variables.
-            outputs(dict): The output dictionary. Has same format with inputs
+            outputs(dict): The output dictionary, which has the same format as
+                inputs.
             attrs(dict): The attributes dictionary. Key is attribute name. Value
                 is the attribute value. The attribute type should be as same as
                 the type registered in C++

@@ -436,10 +437,11 @@ def find_name(var_list, name):
         for m in proto.outputs:
             need.add(m.name)
         if not given == need:
-            raise ValueError(
-                "Incorrect setting for output(s) of operator \"%s\". Need: [%s] Given: [%s]"
-                % (type, ", ".join(str(e) for e in need), ", ".join(
-                    str(e) for e in given)))
+            raise ValueError(("Incorrect setting for output(s) of "
+                              "operator \"%s\". Need: [%s] Given: [%s]") %
+                             (type, ", ".join(str(e) for e in need),
+                              ", ".join(str(e) for e in given)))
 
         for out_proto in proto.outputs:
             out_args = outputs[out_proto.name]

@@ -818,9 +820,8 @@ def prune(self, targets):
                 if isinstance(t, Variable):
                     t = t.op
                 else:
-                    raise ValueError(
-                        "All targets of prune() can only be Variable or Operator."
-                    )
+                    raise ValueError(("All targets of prune() can only be "
+                                      "Variable or Operator."))
 
             targets_idx.append([t.block.idx, t.idx])
         res = Program()
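The `Args` section above is easiest to read next to a call site; here is a hedged sketch of how these arguments are usually supplied through `Block.append_op`, which fills in `block` and `desc` itself (variable names are illustrative):

```python
import paddle.v2.fluid as fluid

program = fluid.Program()
block = program.global_block()

x = block.create_var(name="X", shape=[2, 3], dtype="float32")
y = block.create_var(name="Y", shape=[2, 3], dtype="float32")
out = block.create_var(name="Out", shape=[2, 3], dtype="float32")

# append_op constructs an Operator, passing the inputs/outputs dicts
# straight through to Operator.__init__.
op = block.append_op(
    type="elementwise_add",
    inputs={"X": [x], "Y": [y]},
    outputs={"Out": [out]})
```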
4 changes: 2 additions & 2 deletions python/paddle/v2/fluid/layers/io.py

@@ -28,9 +28,9 @@ def data(name,
     **Data Layer**
 
     This function takes in the input and based on whether data has
-    to be returned back as a minibatch, it creates the global variable using
+    to be returned back as a minibatch, it creates the global variable by using
     the helper functions. The global variables can be accessed by all the
-    following operations and layers in the graph.
+    following operators in the graph.
 
     All the input variables of this function are passed in as local variables
     to the LayerHelper constructor.
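For context, a typical use of this data layer (a standard fluid pattern; names and sizes are illustrative):

```python
import paddle.v2.fluid as fluid

# Creates a global variable that is fed at runtime; the batch dimension
# is implicit, so shape=[784] describes a single example.
image = fluid.layers.data(name="image", shape=[784], dtype="float32")
hidden = fluid.layers.fc(input=image, size=128, act="relu")
```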