
[Relay][Frontend][ONNX] operator support: DepthToSpace, SpaceToDepth #4271

Merged

Conversation

cchung100m
Contributor

Hi @zhiics @jroesch @hlu1

Following the post on the TVM forum, I added implementations of the DepthToSpace and SpaceToDepth operators to frontend/onnx.py.

I would appreciate it if you could review this PR, thank you.
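For context, both ONNX operators are pure data movements between the channel and spatial dimensions. A minimal numpy sketch (my own illustration, not the code in this PR) of the default (DCR-mode) DepthToSpace and its inverse SpaceToDepth:

```python
import numpy as np

def depth_to_space(x, b):
    # ONNX DepthToSpace, default DCR mode: (N, C, H, W) -> (N, C/(b*b), H*b, W*b)
    n, c, h, w = x.shape
    t = x.reshape(n, b, b, c // (b * b), h, w).transpose(0, 3, 4, 1, 5, 2)
    return t.reshape(n, c // (b * b), h * b, w * b)

def space_to_depth(x, b):
    # ONNX SpaceToDepth: (N, C, H, W) -> (N, C*b*b, H/b, W/b)
    n, c, h, w = x.shape
    t = x.reshape(n, c, h // b, b, w // b, b).transpose(0, 3, 5, 1, 2, 4)
    return t.reshape(n, c * b * b, h // b, w // b)

# In DCR mode the two ops invert each other.
x = np.arange(32, dtype='float32').reshape(1, 8, 2, 2)
y = depth_to_space(x, 2)
assert y.shape == (1, 2, 4, 4)
assert np.array_equal(space_to_depth(y, 2), x)
```

Since both ops are just a reshape/transpose/reshape sequence, frontends typically lower them to the corresponding Relay reshape and transpose primitives rather than a dedicated kernel.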

python/tvm/relay/frontend/onnx.py
tests/python/frontend/onnx/test_forward.py
@jwfromm
Contributor

jwfromm commented Nov 9, 2019

Although your changes removing the hardcoded values from the tests are definitely a step in the right direction, I think we should test the actual values instead of just their shapes. To do this, I recommend changing your current testing loop:

for target, ctx in ctx_list():
    x = np.random.uniform(size=inshape).astype('int32')
    tvm_out = get_tvm_output(model, x, target, ctx, outshape, 'float32')
    tvm.testing.assert_allclose(outshape, tvm_out.shape)

to

for target, ctx in ctx_list():
    x = np.random.uniform(size=inshape).astype('int32')
    tvm_out = get_tvm_output(model, x, target, ctx, outshape, 'float32')
    onnx_out = get_caffe2_output(model, x, 'int32')
    tvm.testing.assert_allclose(onnx_out, tvm_out)

That way we can be sure that both the values and shape produced are correct.
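As an aside, a pure-numpy illustration (my own sketch, not code from this PR, using a hypothetical space_to_depth_ref helper) of why comparing full arrays is stricter than comparing shapes:

```python
import numpy as np

def space_to_depth_ref(x, b):
    # Hypothetical numpy reference for ONNX SpaceToDepth (illustration only).
    n, c, h, w = x.shape
    t = x.reshape(n, c, h // b, b, w // b, b).transpose(0, 3, 5, 1, 2, 4)
    return t.reshape(n, c * b * b, h // b, w // b)

x = (np.arange(24, dtype='float32') + 1).reshape(1, 1, 4, 6)
expected = space_to_depth_ref(x, 2)
wrong = np.zeros_like(expected)           # right shape, wrong values

assert wrong.shape == expected.shape      # a shape-only check still passes...
assert not np.allclose(wrong, expected)   # ...but a value check catches the bug
```

This is why the suggested loop compares the TVM output element-wise against a reference runtime's output rather than only asserting on shapes.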

@cchung100m
Contributor Author

cchung100m commented Nov 9, 2019

Hi @jwfromm

Thanks for the suggestion; I changed the code of the testing loop accordingly.

However, I encountered an issue when running test_space_to_depth(), and I would appreciate your help in solving it.

many thanks,

ONNX FATAL: Don't know how to translate op SpaceToDepth
/Users/root/tvm/venv/lib/python3.7/site-packages/caffe2/python/onnx/backend.py:698: UserWarning: This version of onnx-caffe2 targets ONNX operator set version 9, but the model we are trying to import uses version 10.  We will try to import it anyway, but if the model uses operators which had BC-breaking changes in the intervening versions, import will fail.
  warnings.warn("This version of onnx-caffe2 targets ONNX operator set version {}, but the model we are trying to import uses version {}.  We will try to import it anyway, but if the model uses operators which had BC-breaking changes in the intervening versions, import will fail.".format(cls._known_opset_version, imp.version))
Traceback (most recent call last):

  File "/Users/root/tvm/tests/python/frontend/onnx/test_forward.py", line 1784, in <module>
    test_space_to_depth()

  File "/Users/root/tvm/tests/python/frontend/onnx/test_forward.py", line 221, in test_space_to_depth
    verify_space_to_depth((1, 1, 4, 6), (1, 4, 2, 3), 2)

  File "/Users/root/tvm/tests/python/frontend/onnx/test_forward.py", line 212, in verify_space_to_depth
    onnx_out = get_caffe2_output(model, x, 'float32')

  File "/Users/root/tvm/tests/python/frontend/onnx/test_forward.py", line 82, in get_caffe2_output
    prepared_backend = caffe2.python.onnx.backend.prepare(model)

  File "/Users/root/tvm/venv/lib/python3.7/site-packages/caffe2/python/onnx/backend.py", line 712, in prepare
    init_net, predict_net = cls._onnx_model_to_caffe2_net(model, device, opset_version, False)

  File "/Users/root/tvm/venv/lib/python3.7/site-packages/caffe2/python/onnx/backend.py", line 910, in _onnx_model_to_caffe2_net
    raise RuntimeError('ONNX conversion failed')

RuntimeError: ONNX conversion failed

Process finished with exit code 1

@jwfromm
Contributor

jwfromm commented Nov 10, 2019

Hmm, I guess the caffe2 ONNX converter doesn't support SpaceToDepth. I think we really should be using onnxruntime instead of the caffe2 runtime in the test script; however, the CI image used doesn't have onnxruntime. @tqchen, what do you think about adding onnxruntime to CI? Is the right way to add it to put it into docker/install/install_ubuntu_onnx.sh?

@cchung100m
Contributor Author

Hi @jwfromm

Thanks for the prompt reply.

I agree that we can use onnxruntime to test ONNX models, so I added the helper function get_onnxruntime_output to test SpaceToDepth. The code passes with onnxruntime 0.5.0 on my local machine, and it would be great if we could add onnxruntime to the CI pipeline.

@tqchen
Member

tqchen commented Nov 10, 2019

The procedure will be: add the onnxruntime dependency to docker/install/install_ubuntu_onnx.sh, then we rebuild and verify the images and merge that to master before updating the tests. If you can send a PR to add the onnxruntime installation (with a specific version), I will work on the docker image update in the coming week.

@jwfromm
Contributor

jwfromm commented Nov 11, 2019

I've added onnxruntime to the docker image in PR #4299. Once it's merged and the CI image is rebuilt the tests in this PR should be able to pass.

@jwfromm
Contributor

jwfromm commented Nov 15, 2019

LGTM! Thanks for putting up with all my feedback. This turned out to be a great PR. I think if you rebase onto the master branch, all tests should pass now that PR #4313 is merged.

@cchung100m cchung100m force-pushed the support_operator_DepthToSpace_SpaceToDepth branch from 131c9ec to 7429475 Compare November 15, 2019 04:47
@cchung100m cchung100m force-pushed the support_operator_DepthToSpace_SpaceToDepth branch from 3bd9e27 to 0cd1f92 Compare November 15, 2019 11:39
@cchung100m
Contributor Author

@jwfromm

It has been my pleasure to work with you on this PR, and I learned a lot, thanks.

@cchung100m
Contributor Author

Hi @jroesch @soiferj @zhiics

I would appreciate it if you could review this PR, thank you.

Member

@zhiics zhiics left a comment


LGTM

@zhiics zhiics merged commit 510bd8f into apache:master Nov 15, 2019
@zhiics
Member

zhiics commented Nov 15, 2019

Thanks @cchung100m @jwfromm

kevinthesun pushed a commit to neo-ai/tvm that referenced this pull request Nov 25, 2019
@cchung100m cchung100m deleted the support_operator_DepthToSpace_SpaceToDepth branch March 15, 2020 10:03