
Reduce number of iterations in L0 tests #3173

Merged
1 commit merged on Aug 11, 2021

Conversation

jantonguirao
Contributor

Signed-off-by: Joaquin Anton <janton@nvidia.com>

Description

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Refactoring (Redesign of existing code that doesn't affect functionality)
  • Other (e.g. Documentation, Tests, Configuration)

What happened in this PR

Reduces the number of iterations in L0 tests and removes a redundant CPU vs GPU test.

Additional information

  • Affected modules and functionalities:

L0 python tests

  • Key points relevant for the review:

Are we OK with testing fewer iterations?

Checklist

Tests

  • Existing tests apply
  • New tests added
    • Python tests
    • GTests
    • Benchmark
    • Other
  • N/A

Documentation

  • Existing documentation applies
  • Documentation updated
    • Docstring
    • Doxygen
    • RST
    • Jupyter
    • Other
  • N/A

DALI team only

Requirements

  • Implements new requirements
  • Affects existing requirements
  • N/A

REQ IDs: N/A

JIRA TASK: DALI-1001

@@ -64,41 +64,6 @@ def define_graph(self):
images = self.cmn(images, mirror=rng)
return images

def check_cmn_cpu_vs_gpu(batch_size, dtype, output_layout, mirror_probability, mean, std, scale, shift, pad_output):
jantonguirao (Contributor, Author) commented:
This test is redundant: the same thing is already tested against a Python implementation, so there's no need to compare CPU vs GPU here.

@klecki (Contributor) commented Jul 27, 2021:

Is the difference significant enough to justify changing from 5 to 3 iterations? With 5 iterations we are sure to cycle through the output buffer queue at least twice, which has some appeal IMO.
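The concern above can be made concrete with a toy model (all names here are hypothetical, not DALI's actual executor). Assuming an output buffer queue of depth 2, an operator that fails to reset per-buffer state looks correct until a buffer is reused, so the bug only surfaces from iteration 2 onward; 5 iterations reuse each buffer at least once more than 3 iterations do.

```python
QUEUE_DEPTH = 2  # assumed queue depth for this sketch

class LeakyOp:
    """Toy operator that forgets to clear its output buffer between runs."""
    def __init__(self):
        self.buffers = [0] * QUEUE_DEPTH  # reused output buffers

    def run(self, iteration, value):
        buf = iteration % QUEUE_DEPTH
        self.buffers[buf] += value  # bug: accumulates instead of overwriting
        return self.buffers[buf]

def first_failing_iteration(n_iterations):
    """Return the first iteration whose output is wrong, or None."""
    op = LeakyOp()
    for i in range(n_iterations):
        if op.run(i, 1) != 1:  # a correct op would always return 1
            return i
    return None

# Iterations 0 and 1 write fresh buffers and pass; iteration 2 is the first
# to see stale state, so any N_iterations > QUEUE_DEPTH catches the bug,
# and extra iterations beyond that mainly re-confirm it.
```

Under this model, both 3 and 5 iterations catch the state-leak; the extra two iterations only exercise the already-reused buffers again.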

@@ -166,8 +166,7 @@ def check_multichannel_synth_data_vs_numpy(tested_operator, device, batch_size,
eii2 = RandomDataIterator(batch_size, shape=shape)
compare_pipelines(MultichannelSynthPipeline(device, batch_size, "HWC", iter(eii1), tested_operator=tested_operator),
MultichannelSynthPythonOpPipeline(get_numpy_func(tested_operator), batch_size, "HWC", iter(eii2)),
batch_size=batch_size, N_iterations=10,
eps = 0.2)
batch_size=batch_size, N_iterations=3, eps = 0.2)
Reviewer (Member) commented:
Is it OK to reduce the number of iterations for a test that has a significantly broader tolerance? Maybe we should ask why this tolerance is broader in the first place?

jantonguirao (Contributor, Author) replied:

I believe it is OK. Some time ago we didn't think much about the number of iterations; 10 was just an arbitrary choice, I think. At some point we decided that 3 iterations was enough (more than one iteration is necessary, since some operators don't clear their state properly between iterations). If we want to test more, we normally write a different test (different arguments, different kinds of data, etc.). Increasing the number of iterations is not equivalent to better coverage.

Regarding why the tolerance is broader, I don't have a good answer. This tests a whole pipeline with several operators chained, using randomly generated data, against a reference implemented in OpenCV. I assume the larger eps is due to the data being pure noise: subtle differences in the implementation (interpolation, for instance) can result in a larger error. On line 236 we have the same test with actual images, and the eps is much lower there.
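For context, the check being discussed works roughly like this sketch (the hypothetical `compare_outputs` stands in for DALI's `compare_pipelines`; the real helper compares pipeline outputs, this one compares plain arrays): every one of `n_iterations` batches from the two sources must agree within `eps`.

```python
import numpy as np

def compare_outputs(make_batch_a, make_batch_b, n_iterations, eps):
    """Toy analogue of a compare_pipelines-style check: both batch sources
    must agree within a mean absolute difference of eps on every iteration."""
    for _ in range(n_iterations):
        a = np.asarray(make_batch_a(), dtype=np.float64)
        b = np.asarray(make_batch_b(), dtype=np.float64)
        if np.abs(a - b).mean() >= eps:
            return False
    return True

rng = np.random.default_rng(42)
base = rng.random((2, 8, 8, 3))  # a small NHWC batch of random "images"
# Identical sources pass a tight tolerance; a mildly perturbed source still
# fits under the broad eps=0.2 used for pure-noise inputs, while a large
# perturbation does not.
assert compare_outputs(lambda: base, lambda: base, n_iterations=3, eps=0.01)
assert compare_outputs(lambda: base, lambda: base + 0.05, n_iterations=3, eps=0.2)
assert not compare_outputs(lambda: base, lambda: base + 0.5, n_iterations=3, eps=0.2)
```

Since every iteration is checked independently, lowering `n_iterations` from 10 to 3 relaxes how often the tolerance is exercised, not the tolerance itself.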

@@ -52,7 +52,7 @@ def test_color_twist_vs_old():
rand_it2 = RandomDataIterator(batch_size, shape=(1024, 512, 3))
compare_pipelines(ColorTwistPipeline(batch_size, seed, iter(rand_it1), kind="new"),
ColorTwistPipeline(batch_size, seed, iter(rand_it2), kind="old"),
batch_size=batch_size, N_iterations=16, eps=1)
batch_size=batch_size, N_iterations=3, eps=1)
Reviewer (Member) commented:

As above

jantonguirao (Contributor, Author) replied:

I believe 16 was also an arbitrary choice. Testing more iterations doesn't necessarily mean better testing. If we wanted to increase test coverage, we should probably try different arguments instead, for instance.
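The point about growing coverage through arguments rather than iterations can be sketched in the nose generator style used by DALI's Python test suite (the operator name is real, but the check body and parameter values here are made up for illustration):

```python
import itertools

def check_color_twist(batch_size, dtype, layout):
    # Stand-in for a real pipeline-vs-reference comparison.
    assert batch_size > 0 and layout in ("HWC", "CHW")

def test_color_twist():
    # nose-style generator test: each yielded tuple becomes one test case,
    # so coverage grows by sweeping arguments, not by repeating iterations.
    for batch_size, dtype, layout in itertools.product(
            (1, 8), ("uint8", "float32"), ("HWC", "CHW")):
        yield check_color_twist, batch_size, dtype, layout
```

Sweeping three parameters with two values each yields eight distinct cases from one test function, each exercising a different configuration rather than re-running the same one.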

dali/test/python/test_operator_crop.py (conversation resolved)
Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor, Author) commented:

!build

@dali-automaton (Collaborator):

CI MESSAGE: [2739796]: BUILD STARTED

@dali-automaton (Collaborator):

CI MESSAGE: [2739796]: BUILD PASSED

@jantonguirao jantonguirao merged commit 2b9df8a into NVIDIA:main Aug 11, 2021
@JanuszL JanuszL added the important-fix Fixes an important issue in the software or development environment. label Sep 29, 2021
6 participants