
Propagate layout in fn.reductions #4947

Merged: 3 commits merged into NVIDIA:main on Jul 25, 2023

Conversation

@stiepan (Member) commented on Jul 18, 2023

Category:

Bug fix/New feature

Description:

Adds the missing layout propagation to the fn.reductions family of operators. The propagation rules are simple (see the sketch after the list):

  • If the input has no layout, the output has none either.
  • If keep_dims is True, the input layout is copied to the output unchanged.
  • Otherwise, the reduced extents are removed from the layout.
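
For illustration, here is a minimal standalone sketch of these rules, assuming a plain string-based layout; the actual implementation operates on DALI's TensorLayout:

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

std::string PropagateReductionLayout(const std::string &in_layout,
                                     const std::vector<int> &axes,
                                     bool keep_dims) {
  if (in_layout.empty())  // no input layout -> no output layout
    return {};
  if (keep_dims)  // reduced dims are kept (with extent 1), so the layout is unchanged
    return in_layout;
  std::uint64_t mask = 0;  // bitmask of the reduced axes
  for (int a : axes)
    mask |= std::uint64_t(1) << a;
  std::string out_layout;  // keep only the names of the non-reduced dimensions
  for (std::size_t i = 0; i < in_layout.size(); i++)
    if (!(mask & (std::uint64_t(1) << i)))
      out_layout += in_layout[i];
  return out_layout;
}

For example, reducing axes {0, 1} of an "HWC" input yields "C" with keep_dims=False, and "HWC" with keep_dims=True.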

Additional information:

Here's a real-life example of an augmentation that benefits from the layout propagation. Without it, the center = ... line would not work: the result of fn.reductions.mean would carry no layout, so the axis_names="C" argument could not be resolved.

import numpy as np
import nvidia.dali.fn as fn
import nvidia.dali.types as types

def contrast(data, parameter=1.8):
    # keep_dims=True preserves the layout, so axis_names="C" works below
    mean = fn.reductions.mean(data, axis_names="HW", keep_dims=True)
    rgb_weights = types.Constant(np.array([0.299, 0.587, 0.114], dtype=np.float32))
    center = fn.reductions.sum(mean * rgb_weights, axis_names="C", keep_dims=True)
    return fn.cast_like(center + (data - center) * parameter, data)

Affected modules and functionalities:

Key points relevant for the review:

Tests:

  • Existing tests apply
  • New tests added
    • Python tests
    • GTests
    • Benchmark
    • Other
  • N/A

Checklist

Documentation

  • Existing documentation applies
  • Documentation updated
    • Docstring
    • Doxygen
    • RST
    • Jupyter
    • Other
  • N/A

DALI team only

Requirements

  • Implements new requirements
  • Affects existing requirements
  • N/A

REQ IDs: N/A

JIRA TASK: DALI-3554

@stiepan (Member, Author) commented on Jul 18, 2023:

!build

@dali-automaton (Collaborator):

CI MESSAGE: [9014738]: BUILD STARTED

@dali-automaton (Collaborator):

CI MESSAGE: [9014738]: BUILD FAILED

Comment on lines 32 to 41:

int in_ndim = layout.size();
assert(in_ndim >= 0 && in_ndim <= 64);
assert(axes.size() <= in_ndim);
uint64_t mask = 0;
for (auto a : axes) {
  assert(0 <= a && a < in_ndim);
  uint64_t a_mask = 1_u64 << a;
  assert(!(mask & a_mask));  // axes must be unique for the correct out layout dim
  mask |= a_mask;
}
Contributor comment:

Instead of defensively double-checking the axes, you can simply do uint64_t mask = to_bit_mask(axes);
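
For reference, a minimal sketch of what such a helper could look like; the name to_bit_mask comes from the comment above, but the signature here is an assumption rather than the actual DALI utility:

#include <cstdint>

// Hypothetical stand-in for the to_bit_mask utility the reviewer mentions:
// folds a collection of axis indices into a single bitmask.
template <typename Axes>
std::uint64_t to_bit_mask(const Axes &axes) {
  std::uint64_t mask = 0;
  for (auto a : axes)
    mask |= std::uint64_t(1) << a;  // setting an already-set bit is harmless
  return mask;
}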

@stiepan (Member, Author) replied:

done

Comment on lines 42 to 39:

int out_ndim = in_ndim - axes.size();
TensorLayout out_layout;
if (out_ndim <= 0) {
  return out_layout;
}
out_layout.resize(out_ndim);
for (int in_idx = 0, out_idx = 0; in_idx < in_ndim; in_idx++) {
  if (!(mask & (1_u64 << in_idx))) {
    out_layout[out_idx++] = layout[in_idx];
  }
}
@mzient (Contributor) commented on Jul 19, 2023:

Simpler:

Suggested change (replacing the block quoted above):

TensorLayout out_layout;
for (int idx = 0; idx < layout.size(); idx++) {
  if (!(mask & (1_u64 << idx))) {
    out_layout += layout[idx];
  }
}

@stiepan (Member, Author) replied:

Nice. I missed that += op. It lets me drop the unique-axes assumption.
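
A small sketch of why duplicated axes become harmless with the suggested += loop:

#include <cassert>
#include <cstdint>
#include <vector>

int main() {
  // Duplicated axes, e.g. {0, 0, 2}, produce the same mask as {0, 2},
  // because setting an already-set bit is a no-op.
  std::uint64_t mask = 0;
  for (int a : std::vector<int>{0, 0, 2})
    mask |= std::uint64_t(1) << a;
  assert(mask == 0b101u);
  // The += version appends the kept extents one by one instead of writing
  // into a buffer presized to in_ndim - axes.size(), so duplicates cannot
  // shift the output indexing.
  return 0;
}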

Comment on the PropagateLayout declaration:

template <typename Input, typename Output, typename Axes>
inline void PropagateLayout(const Input &input, Output &output, Axes &&axes, bool keep_dims) {
Contributor comment:

Just a nitpick, but we typically do (output, input, params).

@stiepan (Member, Author) replied:

done
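
Presumably the fix swaps the first two parameters; a sketch of the resulting declaration, offered as an assumption (the merged code may differ in details):

// Presumed post-review signature, following the (output, input, params) convention:
template <typename Output, typename Input, typename Axes>
inline void PropagateLayout(Output &output, const Input &input, Axes &&axes, bool keep_dims);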

stiepan added 3 commits:

Signed-off-by: Kamil Tokarski <ktokarski@nvidia.com>
Signed-off-by: Kamil Tokarski <ktokarski@nvidia.com>
Signed-off-by: Kamil Tokarski <ktokarski@nvidia.com>
@stiepan (Member, Author) commented on Jul 25, 2023:

!build

@dali-automaton (Collaborator):

CI MESSAGE: [9100773]: BUILD STARTED

@dali-automaton (Collaborator):

CI MESSAGE: [9100773]: BUILD PASSED

@stiepan merged commit fb833b2 into NVIDIA:main on Jul 25, 2023. 3 checks passed.
JanuszL pushed a commit to JanuszL/DALI that referenced this pull request on Oct 13, 2023:

Set layout to the outputs of the fn.reductions family (if not already set by the specific implementation).

Signed-off-by: Kamil Tokarski <ktokarski@nvidia.com>