[TOPI][RELAY] Add op Size #3094

Merged (4 commits into apache:master, Jul 19, 2019)
Conversation

@yongwww (Member) commented Apr 25, 2019

Add op Size. Size is used in NumPy, TensorFlow, and MXNet, and is needed for TF object detection models such as Mask R-CNN.

@zhiics @kevinthesun @icemelon9 @jroesch @Laurawly
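For reference, a quick sketch of the size semantics in NumPy and TensorFlow that this op mirrors (illustrative only; the TensorFlow call is shown as a comment and not run here):

import numpy as np

x = np.zeros((2, 3, 4), dtype="float32")
print(np.size(x))  # 24 -- total element count, returned as a plain scalar
print(x.size)      # 24 -- same value via the ndarray attribute

# TensorFlow's equivalent returns a rank-0 (scalar) int32 tensor:
#   import tensorflow as tf
#   tf.size(tf.zeros((2, 3, 4)))  # => scalar tensor holding 24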

@yzhliu (Member) left a comment:

Also, please fix the code style: 2-space indent for C++.

CHECK(tt != nullptr);
const auto* param = attrs.as<SizeAttrs>();
CHECK(param != nullptr);
reporter->Assign(types[1], TensorTypeNode::make({1}, param->dtype));
Member:

Looks like the type of size in numpy is scalar?

@yongwww (Member Author):

Yeah, both numpy.size and tf.size are scalar.

Member:

But {1} here is a tensor? I think we need to keep it the same as np.

@yongwww (Member Author):

Good catch, sure, I'll update the code.
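A minimal sketch of the distinction under discussion, expressed with Relay's Python type API (assuming a TVM build of this era; purely illustrative):

from tvm import relay

# What the type relation above produces: a one-element 1-D tensor.
one_elem = relay.TensorType((1,), "int32")  # shape [1]

# What np.size / tf.size actually return: a rank-0 (scalar) tensor.
scalar = relay.TensorType((), "int32")      # shape []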

@zhiics (Member) left a comment:


Some nits

const std::string name = "size",
const std::string tag = kInjective) {
int ndim = static_cast<int>(src->shape.size());
Array<Expr> out_size;
Member:

Use out_size{1} directly?

* \return Tensor of input shape.
*/
inline Tensor size(const Tensor& src,
Type dtype,
Member:

const Type& ?

topi/src/topi.cc (outdated diff):
@@ -300,6 +300,11 @@ TVM_REGISTER_GLOBAL("topi.shape")
*rv = shape(args[0], args[1]);
});

TVM_REGISTER_GLOBAL("topi.size")
.set_body([](TVMArgs args, TVMRetValue *rv) {
*rv = size(args[0], args[1]);
Member:

2-space indent.

return
tvm_input = tvm.nd.array(input, ctx=ctx)
tvm_output = tvm.nd.empty((1,), ctx=ctx, dtype=B.dtype)
print("Running on target: %s" % device)
Member:

Remove or log?

@yongwww (Member Author):

Simply using print here to keep it consistent with all the other tests in topi; all the test cases use print.

*/
inline Tensor size(const Tensor& src,
Type dtype,
const std::string name = "size",
Member:

Either std::string or const std::string& ?

.set_attr<TOpPattern>("TOpPattern", kInjective)
.set_attr<FInferCorrectLayout>("FInferCorrectLayout",
ElemwiseArbitraryLayout)
.set_support_level(3)
Member:

Change the support level to 10.

@@ -607,6 +607,24 @@ def verify_gather_nd(xshape, yshape, y_data):
verify_gather_nd((2, 2, 2), (2, 2), [[0, 1], [1, 0]])


def test_size():
Contributor:

Move the test to test_op_level10.py.

@tqchen (Member) commented Apr 27, 2019

We need to debate the API a bit. Given that numpy's size is ndarray.size, we may not want to use relay.size directly as the operator name. If it is a contrib op, consider something like relay.contrib.array_size.

@yzhliu added the "status: need update" label on Apr 29, 2019
@kevinthesun self-assigned this on May 24, 2019
@icemelon (Member):

What is the status of this PR? @yongwww @tqchen

@yongwww (Member Author) commented May 25, 2019

@icemelon9 I need to address the comments, especially the returned scalar representation. I am working on it.

@jroesch (Member) commented May 31, 2019

> We need to debate the API a bit. Given that numpy's size is ndarray.size, we may not want to use relay.size directly as the operator name. If it is a contrib op, consider something like relay.contrib.array_size.

I think we should name it tensor_size, given we only talk about Tensors in Relay, not arrays.

@tqchen (Member) commented May 31, 2019

@jroesch the naming convention is a bit debatable. Given numpy uses ndarray and we also use NDArray as our storage type, my guess is that array_size is fine if we want to stay relatively consistent with numpy.

@yongwww (Member Author) commented Jun 5, 2019

@tqchen @jroesch both tensor_size and array_size make sense to me. TensorFlow and NumPy use size. In PyTorch, the corresponding name is numel (the number of elements of a tensor; size is used to get the shape of the tensor). How about relay.contrib.numel? Another candidate: relay.contrib.num_element.
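To make the PyTorch naming concrete, a small illustrative sketch (standard PyTorch API, not part of this PR):

import torch

t = torch.zeros(2, 3, 4)
print(t.numel())  # 24 -- the number of elements
print(t.size())   # torch.Size([2, 3, 4]) -- in PyTorch, "size" means the shape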

@tqchen (Member) commented Jun 6, 2019

relay.contrib.num_elements seems to be a clear name.

@yongwww force-pushed the master branch 4 times, most recently from 69ca070 to 48d69ac, on June 13, 2019 at 16:59
@yongwww (Member Author) commented Jun 14, 2019

@yzhliu @jroesch @zhiics @icemelon9 @tqchen @kevinthesun I have incorporated the comments except for the scalar representation. I tried to add scalar support in my own branch, but it failed to run on the graph runtime because the schedule was mutated improperly in Halide. Filed an issue.

@jroesch (Member) commented Jun 14, 2019

Can you rebase?

@jroesch (Member) left a comment:

Can you just address Zhi's comments?

@yongwww (Member Author) commented Jun 14, 2019

@jroesch @zhiics Rebased and fixed Zhi's comments.

@a1010428282:

I wonder if this commit will continue?

@tqchen (Member) commented Jul 16, 2019

@yongwww please rebase, rename to ndarray_size, and let us merge this.

@yongwww (Member Author) commented Jul 17, 2019

@tqchen Renamed. Please take a look.
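A minimal usage sketch of the renamed op, assuming the Relay Python frontend exposes it as relay.ndarray_size with a dtype argument:

from tvm import relay

x = relay.var("x", shape=(2, 3, 4), dtype="float32")
y = relay.ndarray_size(x, dtype="int32")  # number of elements of x
func = relay.Function([x], y)
print(func)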

@tqchen (Member) commented Jul 17, 2019

Please rename the TOPI API as well.

@yongwww (Member Author) commented Jul 18, 2019

@tqchen Updated. The CI seems to hang.

@tqchen merged commit 313bc9d into apache:master on Jul 19, 2019
@tqchen added the "status: accepted" label and removed the "status: need update" label on Jul 19, 2019
wweic pushed a commit to wweic/tvm referencing this pull request on Aug 9, 2019
wweic pushed a commit to neo-ai/tvm referencing this pull request on Sep 6, 2019