Add support for the Module class. #33

Merged: 11 commits merged into paddle_compiler on Aug 23, 2021
Conversation

@wzzju (Owner) commented Aug 19, 2021

PR types

New features

PR changes

Others

Describe

  • Add support for the Module class, which contains several instructions. A module can be created from a protobuf message, as shown below:
ModuleProto module_proto;
// initialize the content of module_proto
// ...
Module m(module_proto);
  • Add the RawIterator class, an iterator adapter through which we can obtain the raw pointer held by a smart pointer. With the help of RawIterator and boost::iterator_range, we can traverse the instructions in a function, or the functions in a module, without using extra space. Usage is shown below (a fuller end-to-end sketch follows this list):
for (Function &func : module->functions()) {
  // ...
}

for (Instruction &instr : function->instructions()) {
  // ...
}
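
For orientation, here is a minimal end-to-end sketch combining the two items above. The ModuleProto field names are taken from the prototext printed by IrTest.ModuleToProto in the test log below; the protobuf-generated setters and the construction details beyond the snippets above are assumptions, not code from this PR:

ModuleProto module_proto;
// Top-level fields, named as in the prototext emitted by the unit tests below
// (setter names assume the usual protobuf-generated accessors).
module_proto.set_name("union_example");
module_proto.set_entry_function_name("union_example");
module_proto.set_id(4);
module_proto.set_entry_function_id(4);
// ... populate module_proto.add_functions() with instructions and a signature ...

Module m(module_proto);

// Traverse functions and instructions in place, without building temporary
// containers, via the iterator adaptor wrapped in boost::iterator_range.
for (Function &func : m.functions()) {
  for (Instruction &instr : func.instructions()) {
    LOG(INFO) << instr.name();
  }
}

The unit-test output attached to the PR follows: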
167: Test command: /work/Paddle/build/paddle/fluid/compiler/piano/note/note_ir_test
167: Environment variables: 
167:  FLAGS_cudnn_deterministic=true
167: Test timeout computed to be: 10000000
167: [==========] Running 6 tests from 1 test case.
167: [----------] Global test environment set-up.
167: [----------] 6 tests from IrTest
167: [ RUN      ] IrTest.ModuleToString
167: WARNING: Logging before InitGoogleLogging() is written to STDERR
167: I0823 03:43:43.420991  7334 note_ir_test.cc:221] Module string:
167: Module union_12510013719728903619
167: 
167: [[entry]] def %union_12510013719728903619(arg1.1: f32[3, 6]{}, arg2.2: f32[3, 6]{}) -> f32[3, 6]{} {
167:   %arg1.1 = f32[3, 6]{} parameter(), test_bools={true, false}, test_double=3.141000_d, test_ints={8_i, 26_i}, test_strings={"hello", "world"}
167:   %arg2.2 = f32[3, 6]{} parameter(), test_bool=true, test_doubles={5.660000_d, 6.660000_d}, test_floats={8.600000_f, 7.600000_f}, test_longs={8_l, 16_l}
167:   return %add.3 = f32[3, 6]{} add(f32[3, 6]{} %arg1.1, f32[3, 6]{} %arg2.2), test_float=-1.414000_f, test_int=-1_i, test_long=-100_l, test_string="Add"
167: }
167: 
167: [       OK ] IrTest.ModuleToString (1 ms)
167: [ RUN      ] IrTest.ModuleToProto
167: I0823 03:43:43.421424  7334 note_ir_test.cc:229] The module prototext:
167: name: "union_12510013719728903619"
167: entry_function_name: "union_12510013719728903619"
167: functions {
167:   name: "union_12510013719728903619"
167:   instructions {
167:     name: "arg1.1"
167:     opcode: "parameter"
167:     shape {
167:       element_type: F32
167:       dimensions: 3
167:       dimensions: 6
167:     }
167:     id: 1
167:     parameter_index: 0
167:     attrs {
167:       key: "test_bools"
167:       value {
167:         bools {
167:           value: true
167:           value: false
167:         }
167:       }
167:     }
167:     attrs {
167:       key: "test_double"
167:       value {
167:         d: 3.141
167:       }
167:     }
167:     attrs {
167:       key: "test_ints"
167:       value {
167:         ints {
167:           value: 8
167:           value: 26
167:         }
167:       }
167:     }
167:     attrs {
167:       key: "test_strings"
167:       value {
167:         strings {
167:           value: "hello"
167:           value: "world"
167:         }
167:       }
167:     }
167:   }
167:   instructions {
167:     name: "arg2.2"
167:     opcode: "parameter"
167:     shape {
167:       element_type: F32
167:       dimensions: 3
167:       dimensions: 6
167:     }
167:     id: 2
167:     parameter_index: 1
167:     attrs {
167:       key: "test_bool"
167:       value {
167:         b: true
167:       }
167:     }
167:     attrs {
167:       key: "test_doubles"
167:       value {
167:         doubles {
167:           value: 5.66
167:           value: 6.66
167:         }
167:       }
167:     }
167:     attrs {
167:       key: "test_floats"
167:       value {
167:         floats {
167:           value: 8.6
167:           value: 7.6
167:         }
167:       }
167:     }
167:     attrs {
167:       key: "test_longs"
167:       value {
167:         longs {
167:           value: 8
167:           value: 16
167:         }
167:       }
167:     }
167:   }
167:   instructions {
167:     name: "add.3"
167:     opcode: "add"
167:     shape {
167:       element_type: F32
167:       dimensions: 3
167:       dimensions: 6
167:     }
167:     id: 3
167:     operand_ids: 1
167:     operand_ids: 2
167:     attrs {
167:       key: "test_float"
167:       value {
167:         f: -1.414
167:       }
167:     }
167:     attrs {
167:       key: "test_int"
167:       value {
167:         i: -1
167:       }
167:     }
167:     attrs {
167:       key: "test_long"
167:       value {
167:         l: -100
167:       }
167:     }
167:     attrs {
167:       key: "test_string"
167:       value {
167:         s: "Add"
167:       }
167:     }
167:   }
167:   signature {
167:     parameters {
167:       element_type: F32
167:       dimensions: 3
167:       dimensions: 6
167:     }
167:     parameters {
167:       element_type: F32
167:       dimensions: 3
167:       dimensions: 6
167:     }
167:     parameter_names: "arg1.1"
167:     parameter_names: "arg2.2"
167:     result {
167:       element_type: F32
167:       dimensions: 3
167:       dimensions: 6
167:     }
167:   }
167:   id: 4
167:   return_id: 3
167: }
167: entry_function_signature {
167:   parameters {
167:     element_type: F32
167:     dimensions: 3
167:     dimensions: 6
167:   }
167:   parameters {
167:     element_type: F32
167:     dimensions: 3
167:     dimensions: 6
167:   }
167:   parameter_names: "arg1.1"
167:   parameter_names: "arg2.2"
167:   result {
167:     element_type: F32
167:     dimensions: 3
167:     dimensions: 6
167:   }
167: }
167: id: 4
167: entry_function_id: 4
167: [       OK ] IrTest.ModuleToProto (0 ms)
167: [ RUN      ] IrTest.ModuleDetails
167: I0823 03:43:43.422032  7334 note_ir_test.cc:249] union_12510013719728903619
167: [       OK ] IrTest.ModuleDetails (1 ms)
167: [ RUN      ] IrTest.FunctionDetails
167: I0823 03:43:43.422127  7334 note_ir_test.cc:273] arg1.1
167: I0823 03:43:43.422143  7334 note_ir_test.cc:273] arg2.2
167: I0823 03:43:43.422154  7334 note_ir_test.cc:273] add.3
167: [       OK ] IrTest.FunctionDetails (0 ms)
167: [ RUN      ] IrTest.InstructionDetails
167: [       OK ] IrTest.InstructionDetails (0 ms)
167: [ RUN      ] IrTest.VisitAttr
167: [       OK ] IrTest.VisitAttr (0 ms)
167: [----------] 6 tests from IrTest (2 ms total)
167: 
167: [----------] Global test environment tear-down
167: [==========] 6 tests from 1 test case ran. (2 ms total)
167: [  PASSED  ] 6 tests.
1/1 Test #167: note_ir_test .....................   Passed    0.01 sec

The following tests passed:
	note_ir_test

100% tests passed, 0 tests failed out of 1

@wzzju changed the title from "Note module" to "Add support for Module class." on Aug 20, 2021
@wzzju marked this pull request as ready for review on Aug 23, 2021, 03:42
@wzzju changed the title from "Add support for Module class." to "Add support for the Module class." on Aug 23, 2021
// return an instruction included in this function by the given index
Instruction *instruction(std::int64_t idx) const {
  PADDLE_ENFORCE_NOT_NULL(
      instructions_.at(idx).get(),
Collaborator:

Wouldn't it be better to also check that idx < instructions_.size() here? Otherwise at(idx) will throw an exception.

Owner Author (@wzzju, Aug 23, 2021):

We generally use PADDLE_ENFORCE to detect out-of-bounds indices as well, which is exactly why at(idx) is used here. Still, we can consider switching this check to PADDLE_ENFORCE.

Owner Author:

Changed.
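
The merged code is not reproduced in this excerpt. As a rough sketch, the bounds check discussed above could look like the following, assuming Paddle's PADDLE_ENFORCE_LT macro and platform::errors::OutOfRange are available in this file (a matching check for idx >= 0 could be added the same way):

// Return the instruction at the given index (sketch only, not the PR's final code).
Instruction *instruction(std::int64_t idx) const {
  PADDLE_ENFORCE_LT(
      idx, static_cast<std::int64_t>(instructions_.size()),
      platform::errors::OutOfRange(
          "The instruction index %ld is out of range [0, %zu).", idx,
          instructions_.size()));
  return instructions_.at(idx).get();
}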

return param_instrs_.at(idx);
const Instruction &param_instr(std::int64_t idx) const {
PADDLE_ENFORCE_NOT_NULL(
return_instr_,
Collaborator:

Shouldn't this be param_instrs_ rather than return_instr_?

Owner Author:

Yes, thanks.

thisjiang previously approved these changes on Aug 23, 2021

@thisjiang (Collaborator) left a comment:

LGTM

}

-  const Instruction *instruction(std::int64_t idx) const {
+  // return an instruction included in this function by the given index
+  Instruction *instruction(std::int64_t idx) const {
Collaborator:

Might callers modify the returned instruction?

Owner Author:

Not decided yet. After this change, the instruction returned from both const and non-const objects can be modified externally, which grants fairly broad access. Do you think it should be restricted?

Collaborator:

Yes, it needs to be changed.
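
The follow-up change is not shown in this excerpt. A minimal sketch of one way to narrow the access, assuming the instructions are stored as std::unique_ptr<Instruction> in instructions_, is to pair a const accessor with an explicitly named mutable one (the names are illustrative, not necessarily the PR's):

// Read-only access from const objects (sketch only).
const Instruction *instruction(std::int64_t idx) const {
  return instructions_.at(idx).get();
}

// Mutable access is only available on non-const objects (sketch only).
Instruction *mutable_instruction(std::int64_t idx) {
  return instructions_.at(idx).get();
}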

// BaseIterator<IteratorT>::pointer
// ...
template <typename IteratorT>
using BaseIterator =
Collaborator:

The names BaseIterator and RawIterator are rather generic; consider adding more qualifiers. For example, this one is specifically a forward iterator, and IteratorT specifically refers to a smart pointer. Also, it would be best to add a unit test for this class so that other developers can get a rough idea of how to use it later.

Owner Author (@wzzju, Aug 23, 2021):

The names have been changed. This code is covered by a unit test:

  int instr_count = 0;
  for (Instruction& instr : func->instructions()) {
    instr_count++;
    LOG(INFO) << instr.name();
  }
  ASSERT_EQ(instr_count, 3);

The code above is exactly what exercises this class; a separate standalone test cannot really be added, since the class is only ever used inside such loops.
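
For readers who want a concrete picture of the adaptor being discussed, here is a self-contained sketch plus a standalone gtest over a plain container. It assumes the adaptor is built on boost::iterator_adaptor (the PR itself only mentions boost::iterator_range), and the names RawPtrForwardIterator and Pointee are illustrative, not the PR's actual (renamed) identifiers:

#include <gtest/gtest.h>

#include <iterator>
#include <memory>
#include <vector>

#include <boost/iterator/iterator_adaptor.hpp>
#include <boost/range/iterator_range.hpp>

// The type owned by the smart pointers that the underlying iterator visits.
template <typename SmartPtrIter>
using Pointee =
    typename std::iterator_traits<SmartPtrIter>::value_type::element_type;

// A forward iterator over a container of smart pointers that yields the
// pointees by reference, so range-for loops see T& instead of unique_ptr<T>&.
template <typename SmartPtrIter>
class RawPtrForwardIterator
    : public boost::iterator_adaptor<RawPtrForwardIterator<SmartPtrIter>,
                                     SmartPtrIter, Pointee<SmartPtrIter>,
                                     boost::forward_traversal_tag> {
 public:
  explicit RawPtrForwardIterator(SmartPtrIter it)
      : RawPtrForwardIterator::iterator_adaptor_(it) {}

 private:
  friend class boost::iterator_core_access;
  Pointee<SmartPtrIter> &dereference() const { return *this->base()->get(); }
};

// A standalone test over a plain std::vector, independent of Function/Module.
TEST(RawPtrForwardIteratorSketch, YieldsPointeesByReference) {
  std::vector<std::unique_ptr<int>> values;
  values.push_back(std::make_unique<int>(1));
  values.push_back(std::make_unique<int>(2));
  values.push_back(std::make_unique<int>(3));

  using Iter = RawPtrForwardIterator<decltype(values)::iterator>;
  auto range =
      boost::make_iterator_range(Iter(values.begin()), Iter(values.end()));

  int sum = 0;
  for (int &v : range) sum += v;
  EXPECT_EQ(sum, 6);
}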

SunNy820828449 previously approved these changes on Aug 23, 2021
@CtfGo (Collaborator) left a comment:

LGTM

@CtfGo merged commit 988e2d8 into paddle_compiler on Aug 23, 2021
wzzju pushed a commit that referenced this pull request on Oct 19, 2021. Its squashed commit message follows:
* 1. add interface for fft;
2. add data type predicate;
3. fix paddle.roll.

* add fft c2c cufft kernel

* implement argument checking & op calling parts for fft_c2c and fftn_c2c

* add operator and opmaker definitions

* only register float and double for cpu.

* add common code for implementing FFT, add pocketfft as a dependency

* add fft c2c cufft kernel function

* fix bugs in python interface

* add support for c2r, r2c operators, op makers, kernels and kernel functors.

* test and fix bugs

* 1. fft_c2c function: add support for onesided=False;
2. add complex<float>, complex<double> support for concat and flip.

* 1. fft: fix python api bugs;
2. shape_op: add support for complex data types.

* fft c2c cufft kernel done with compile and link

* fix shape_op, add mkl placeholder

* remove mkl

* complete fft c2c in gpu

* 1. implement mkl-based fft, FFTC2CFunctor and common function exec_fft;
2. change the design, add input and output typename as template parameter for all FFTFunctors, update pocketfft-based implementation.

* complete fft c2c on gpu in ND

* complete fft c2c on gpu in ND

* complete fft c2c backward in ND

* fix MKL-based implementation

* Add frame op and CPU/GPU kernels.

* Add frame op forward unittest.

* Add frame op forward unittest.

* Remove axis parameter in FrameFunctor.

* Add frame op grad CPU/GPU kernels and unittest.

* Add frame op grad CPU/GPU kernels and unittest.

* Update doc string.

* Update after review and remove librosa requirement in unittest.

* Update grad kernel.

* add fft_c2r op

* Remove data allocation in TransCompute function.

* add fft r2c onesided with cpu(pocketfft/mkl) and gpu

* last fft c2r functor

* fix C2R and R2C for cufft, because the direction is not an option in these cases.

* add fft r2c onesided with cpu(pocketfft/mkl) and gpu

* fix bugs in python APIs

* fix fft_c2r grad kernel

* fix bugs in python APIs

* add cuda fft c2r grad kernel functor

* clean code

* fix fft_c2r python API

* fill fft r2c result with conjugate symmetry (#19)

fill fft r2c result with conjugate symmetry

* add placeholder for unittests (#24)

* simplify parameterized test functions by auto-generating test cases from a param list (#25)

* miscellaneous fixes for python APIs (#26)

* add placeholder for unittests

* resize fft inputs before computation if n or s is provided.

* add complex kernels for pad and pad_grad

* simplify argument checking.

* add type promotion

* add int to float or complex promotion

* fix output data type for static mode

* fix fft's input dtype dispatch, import fft to paddle

* fix typos in axes checking (#27)

* fix typos in axes checking

* fix argument checking (#28)

* fix argument checking

* Add C2R Python layer normal and abnormal use cases (#29)

* documents and single case

* test c2r case

* New C2R Python layer normal and exception use cases

* complete rfft,rfft2,rfftn,ihfft,ihfft2,ihfftn unittest and doc string (#30)

* Documentation of the common interfaces of c2r and c2c (#31)

* Documentation of the common interfaces of c2r and c2c

* clean c++ code  (#32)

* clean code

* Add numpy-based implementation of spectral ops (#33)

* add numpy reference implementation of spectral ops

* Add fft_c2r numpy based implementation for unittest. (#34)

* add fft_c2r numpy implementation

* Add deframe op and stft/istft api. (#23)

* Add frame api

* Add deframe op and kernels.

* Add stft and istft apis.

* Add deframe api. Update stft and istft apis.

* Fix bug in frame_from_librosa function when input dims >= 3

* Rename deframe to overlap_add.

* Update istft.

* Update after code review.

* Add overlap_add op and stft/istft api unittest (#35)

* Add overlap_add op unittest.

* Register complex kernels of squeeze/unsqueeze op.

* Add stft/istft api unittest.

* Add unittest for fft helper functions (#36)

* add unittests for fft helper functions. add complex kernel for roll op.

* complete static graph unittest for all public api (#37)

* Unittest of op with FFT C2C, C2R and r2c added (#38)

* documents and single case

* test c2r case

* New C2R Python layer normal and exception use cases

* Documentation of the common interfaces of c2r and c2c

* Unittest of op with FFT C2C, C2R and r2c added

Co-authored-by: lijiaqi <lijiaqi0612@163.com>

* add fft related options to CMakeLists.txt

* fix typos and clean code (#39)

* fix invisible character in mkl branch and fix error in error message

* clean code: remove docstring from unittest for signal.py.

* always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. (#40)

* always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype.

* fix CI Errors: numpy dtype comparison, thrust when cuda is not available (#41)

1. always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype.
2. promote floating point tensors to complex tensors for fft_c2c and fft_c2r;
3. fix unittest to catch UnImplementedError and RuntimeError;
4. fix compile error by avoid using thrust when cuda is not available.
5.  fix sample code, use paddle.fft instead of paddle.tensor.fft

* remove inclusion of thrust, add __all__ list for fft (#42)

* Add api doc and update unittest. (#43)

* Add doc strings.
* Update overlap_add op unittest

* fix MKL-based FFT implementation (#44)

* fix MKL-based FFT implementation, MKL CDFT's FORWARD DOMAIN is always REAL for R2C and C2R

* remove code for debug (#45)

* use dynload for cufft (#46)

* use std::ptrdiff_t as datatype of stride (instead of int64_t) to avoid argument mismatch on some platforms.

* add complex support for fill_zeros_like

* use dynload for cufft

* Update doc and unittest. (#47)

* Add doc of frame op and overlap_add op.

* Update unittest.

* use dynload for cufft (#48)

1. use dynload for cufft
2. fix unittest;
3. temporarily disable Rocm.

* fix conflicts and merge upstream (#49)

fix conflicts and merge upstream

* fix compile error: only link dyload_cuda when cuda is available (PaddlePaddle#50)

* fix compile error: only link dyload_cuda when cuda is available

* fix dynload for cufft on windows (PaddlePaddle#51)

1. fix dynload for cufft on windows;
2. fix unittests.

* add NOMINMAX to compile on windows (PaddlePaddle#52)

 add NOMINMAX to compile on windows

* explicitly specify capture mode for lambdas (PaddlePaddle#55)

 explicitly specify capture mode for lambdas

* fix fft sample (PaddlePaddle#53)

* fix fft sample

* update scipy and numpy version for unittests of fft (PaddlePaddle#56)

update scipy and numpy version for unittests of fft

* Add static graph unittests of frame and overlap_add api. (PaddlePaddle#57)

* Remove cache of cuFFT & Disable ONEMKL (PaddlePaddle#59)

1. replace numpy.fft with scipy.fft, as numpy<1.20 does not support ortho norm
2. remove cache of cufft plans;
3. enhance error checking.
4. default WITH_ONEMKL to OFF

Co-authored-by: jeff41404 <jeff41404@gmail.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming9.bjyz.baidu.com>
Co-authored-by: KP <109694228@qq.com>
Co-authored-by: lijiaqi <lijiaqi0612@163.com>
Co-authored-by: Xiaoxu Chen <chenxx_id@163.com>
Co-authored-by: lijiaqi0612 <33169170+lijiaqi0612@users.noreply.github.com>