
Floating point exception (overflow) #46

Closed
F0REacH opened this issue Sep 8, 2016 · 2 comments

Comments
@F0REacH
Contributor
F0REacH commented Sep 8, 2016

I used the same config as in issue #44, but on CPU (changed only batch_size=45).
It looks like a floating point overflow, but I can't figure out what is causing it. Maybe an incorrect layer connection?

I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/utils/Util.cpp:144] commandline: /opt/paddle/bin/../opt/paddle/bin/paddle_trainer --config=trainer_config.py --save_dir=./model_output --job=train --use_gpu=false --trainer_count=4 --num_passes=100000 --log_period=10 --dot_period=1 --show_parameter_stats_period=1000 --test_all_data_in_one_period=1 --saving_period=100 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/utils/Util.cpp:113] Calling runInitFunctions
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/utils/Util.cpp:126] Call runInitFunctions done.
[INFO 2016-09-08 02:48:21,778 networks.py:1122] The input order is [input, label]
[INFO 2016-09-08 02:48:21,778 networks.py:1129] The output order is [__cost_0__]
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Trainer.cpp:169] trainer mode: Normal
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/dataproviders/PyDataProvider2.cpp:219] loading dataprovider dataprovider::process
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/dataproviders/PyDataProvider2.cpp:219] loading dataprovider dataprovider::process
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/GradientMachine.cpp:134] Initing parameters..
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/GradientMachine.cpp:141] Init parameters done.
....I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/TrainerInternal.cpp:179]  Pass=0 Batch=4 samples=178 AvgCost=15884.9 Eval: classification_error_evaluator=0.993309 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Tester.cpp:111]  Test samples=2 cost=172604 Eval: classification_error_evaluator=0.995207 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/GradientMachine.cpp:112] Saving parameters to ./model_output/pass-00000
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/utils/Util.cpp:219] copy trainer_config.py to ./model_output/pass-00000
....I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/TrainerInternal.cpp:179]  Pass=1 Batch=4 samples=178 AvgCost=14954.9 Eval: classification_error_evaluator=0.980111 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Tester.cpp:111]  Test samples=2 cost=159166 Eval: classification_error_evaluator=0.975115 
....I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/TrainerInternal.cpp:179]  Pass=2 Batch=4 samples=178 AvgCost=14009.3 Eval: classification_error_evaluator=0.935489 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Tester.cpp:111]  Test samples=2 cost=135530 Eval: classification_error_evaluator=0.871007 
... some steps ....

I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Tester.cpp:111]  Test samples=2 cost=97979 Eval: classification_error_evaluator=0.676567 
....I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/TrainerInternal.cpp:179]  Pass=46 Batch=4 samples=178 AvgCost=8838.92 Eval: classification_error_evaluator=0.705236 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Tester.cpp:111]  Test samples=2 cost=102650 Eval: classification_error_evaluator=0.730931 
....I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/TrainerInternal.cpp:179]  Pass=47 Batch=4 samples=178 AvgCost=8806.03 Eval: classification_error_evaluator=0.701997 
I /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/trainer/Tester.cpp:111]  Test samples=2 cost=91264.2 Eval: classification_error_evaluator=0.659892 
/opt/paddle/bin/paddle: line 46:  4547 Floating point exception(core dumped) ${DEBUGGER} $MYDIR/../opt/paddle/bin/paddle_trainer ${@:2}

The error repeats after ~40 passes each time I run training.
Backtrace:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/opt/paddle/bin/../opt/paddle/bin/paddle_trainer --config=trainer_config.py --s'.
Program terminated with signal SIGFPE, Arithmetic exception.
#0  0x00007f53d32d8a15 in __ieee754_exp_avx (x=<optimized out>) at ../sysdeps/ieee754/dbl-64/e_exp.c:214
214     ../sysdeps/ieee754/dbl-64/e_exp.c: No such file or directory.
[Current thread is 1 (Thread 0x7f53cec29700 (LWP 4548))]
(gdb) bt
#0  0x00007f53d32d8a15 in __ieee754_exp_avx (x=<optimized out>) at ../sysdeps/ieee754/dbl-64/e_exp.c:214
#1  0x00007f53d329847f in __GI___exp (x=711.2794189453125) at ../sysdeps/ieee754/dbl-64/w_exp.c:26
#2  0x0000000000e2c4dd in hppl::tanh (a=-355.639709) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/src/hl_cpu_functions.cc:33
#3  0x0000000000a3dd22 in hppl::forward::lstm::operator() (this=0x7f53cec28050, valueIn=@0x7f53cec28014: -0.940858305, valueIg=@0x7f53cec28010: 0.999997735, valueFg=@0x7f53cec2800c: 0.999997735, valueOg=@0x7f53cec28008: 0.999997735, prevState=@0x7f53cec27ff4: -354.699646, 
    state=@0x7f53cec27ff8: -355.639709, stateAtv=@0x7f53cec27ff0: 0.368853271, output=@0x7f53cec27fec: 0.165656254, checkI=@0x7f53cec28004: -0.0588896535, checkF=@0x7f53cec28000: -0.0764867961, checkO=@0x7f53cec27ffc: -0.0473404899, actInput=0xe2c4ba <hppl::tanh(float)>, 
    actGate=0xe2c431 <hppl::sigmoid(float)>, actState=0xe2c4ba <hppl::tanh(float)>) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/include/hl_lstm_ops.cuh:65
#4  0x0000000000a3ec6c in hl_naive_lstm_forward_one_sequence<hppl::forward::lstm> (op=..., value=..., frameSize=6, active_node=HL_ACTIVATION_TANH, active_gate=HL_ACTIVATION_SIGMOID, active_state=HL_ACTIVATION_TANH)
    at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/include/hl_cpu_lstm.cuh:60
#5  0x0000000000a3e662 in hl_cpu_lstm_forward<hppl::forward::lstm> (op=..., value=..., frameSize=6, active_node=HL_ACTIVATION_TANH, active_gate=HL_ACTIVATION_SIGMOID, active_state=HL_ACTIVATION_TANH) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/include/hl_cpu_lstm.cuh:348
#6  0x0000000000a3d94f in paddle::LstmCompute::forwardOneSequence<false> (this=0x2e423a8, value=..., frameSize=6) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/layers/LstmCompute.cpp:32
#7  0x0000000000a3da0f in paddle::LstmCompute::forwardBatch<false> (this=0x2e423a8, value=..., frameSize=6, batchSize=10) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/layers/LstmCompute.cpp:47
#8  0x0000000000a3b75d in paddle::LstmLayer::forwardBatch (this=0x2e42010, batchSize=37105, numSequences=11, starts=0x7f53b805bb40, inputValue=std::shared_ptr (count 2, weak 0) 0x7f53b80d1a10) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/layers/LstmLayer.cpp:501
#9  0x0000000000a38c8c in paddle::LstmLayer::forward (this=0x2e42010, passType=paddle::enumeration_wrapper::PASS_TRAIN) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/layers/LstmLayer.cpp:172
#10 0x0000000000ac2334 in paddle::NeuralNetwork::forward (this=0x2e1a3e0, inArgs=std::vector of length 2, capacity 2 = {...}, outArgs=0x2e10d08, passType=paddle::enumeration_wrapper::PASS_TRAIN)
    at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/NeuralNetwork.cpp:242
#11 0x0000000000ad620c in paddle::TrainerThread::forward (this=0x2e10be0) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/MultiGradientMachine.cpp:581
#12 0x0000000000ad5ef2 in paddle::TrainerThread::computeThread (this=0x2e10be0) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/MultiGradientMachine.cpp:519
#13 0x0000000000ad5abd in paddle::TrainerThread::<lambda()>::operator()(void) const (__closure=0x2ef45f8) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/gserver/gradientmachines/MultiGradientMachine.cpp:465
#14 0x0000000000adb9b2 in std::_Bind_simple<paddle::TrainerThread::start()::<lambda()>()>::_M_invoke<>(std::_Index_tuple<>) (this=0x2ef45f8) at /opt/gcc/include/c++/4.9.4/functional:1700
#15 0x0000000000adb6ed in std::_Bind_simple<paddle::TrainerThread::start()::<lambda()>()>::operator()(void) (this=0x2ef45f8) at /opt/gcc/include/c++/4.9.4/functional:1688
#16 0x0000000000adb4d2 in std::thread::_Impl<std::_Bind_simple<paddle::TrainerThread::start()::<lambda()>()> >::_M_run(void) (this=0x2ef45e0) at /opt/gcc/include/c++/4.9.4/thread:115
#17 0x00007f53d363d380 in std::execute_native_thread_routine_compat (__p=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/thread.cc:110
#18 0x00007f53d7578454 in start_thread (arg=0x7f53cec29700) at pthread_create.c:333
#19 0x00007f53d2da715d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
(gdb) frame 0
#0  0x00007f53d32d8a15 in __ieee754_exp_avx (x=<optimized out>) at ../sysdeps/ieee754/dbl-64/e_exp.c:214
214     in ../sysdeps/ieee754/dbl-64/e_exp.c
(gdb) info locals
ctx = {env = {__control_word = <optimized out>, __glibc_reserved1 = <optimized out>, __status_word = <optimized out>, __glibc_reserved2 = <optimized out>, __tags = <optimized out>, __glibc_reserved3 = <optimized out>, __eip = <optimized out>, __cs_selector = <optimized out>, 
    __opcode = <optimized out>, __glibc_reserved4 = <optimized out>, __data_offset = <optimized out>, __data_selector = <optimized out>, __glibc_reserved5 = <optimized out>, __mxcsr = 39281}, updated_status = <optimized out>}
bexp = <optimized out>
t = 0.11041169086502123
eps = <optimized out>
del = <optimized out>
base = 0.11041259765625
y = 25769803776.110413
al = 1.1167387406605691
bet = -1.4572163044673799e-09
res = 1.1167377264919247
rem = -1.0141686444258574e-06
cor = -2.8229067033144067e-17
junk1 = <optimized out>
m = 1082538556
n = 1082538556
ex = <optimized out>
retval = <optimized out>
(gdb) frame 1
#1  0x00007f53d329847f in __GI___exp (x=711.2794189453125) at ../sysdeps/ieee754/dbl-64/w_exp.c:26
26      ../sysdeps/ieee754/dbl-64/w_exp.c: No such file or directory.
(gdb) info locals
z = <optimized out>
(gdb) frame 2
#2  0x0000000000e2c4dd in hppl::tanh (a=-355.639709) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/src/hl_cpu_functions.cc:33
33          return (2.0 / (1.0 + exp(-2.0*a))) - 1.0;
(gdb) info locals
No locals.
(gdb) frame 3
#3  0x0000000000a3dd22 in hppl::forward::lstm::operator() (this=0x7f53cec28050, valueIn=@0x7f53cec28014: -0.940858305, valueIg=@0x7f53cec28010: 0.999997735, valueFg=@0x7f53cec2800c: 0.999997735, valueOg=@0x7f53cec28008: 0.999997735, prevState=@0x7f53cec27ff4: -354.699646, 
    state=@0x7f53cec27ff8: -355.639709, stateAtv=@0x7f53cec27ff0: 0.368853271, output=@0x7f53cec27fec: 0.165656254, checkI=@0x7f53cec28004: -0.0588896535, checkF=@0x7f53cec28000: -0.0764867961, checkO=@0x7f53cec27ffc: -0.0473404899, actInput=0xe2c4ba <hppl::tanh(float)>, 
    actGate=0xe2c431 <hppl::sigmoid(float)>, actState=0xe2c4ba <hppl::tanh(float)>) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/include/hl_lstm_ops.cuh:65
65          stateAtv = actState(state);
(gdb) info locals
No locals.
(gdb) frame 4
#4  0x0000000000a3ec6c in hl_naive_lstm_forward_one_sequence<hppl::forward::lstm> (op=..., value=..., frameSize=6, active_node=HL_ACTIVATION_TANH, active_gate=HL_ACTIVATION_SIGMOID, active_state=HL_ACTIVATION_TANH)
    at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/include/hl_cpu_lstm.cuh:60
60          op(rValueIn,
(gdb) info locals
i = 3
rValueIn = -0.940858305
rValueFg = 0.999997735
rCheckO = -0.0473404899
rPrevState = -354.699646
rOut = 0.165656254
valueOg = 0x7f5324c72908
rValueIg = 0.999997735
valueIn = 0x7f5324c728c0
valueFg = 0x7f5324c728f0
rCheckI = -0.0588896535
valueIg = 0x7f5324c728d8
rValueOg = 0.999997735
rCheckF = -0.0764867961
rState = -355.639709
rStateAtv = 0.368853271
(gdb) frame 5
#5  0x0000000000a3e662 in hl_cpu_lstm_forward<hppl::forward::lstm> (op=..., value=..., frameSize=6, active_node=HL_ACTIVATION_TANH, active_gate=HL_ACTIVATION_SIGMOID, active_state=HL_ACTIVATION_TANH) at /home/foreach/SOFT/BAIDU/PADDLE/Paddle/paddle/cuda/include/hl_cpu_lstm.cuh:348
348         hl_naive_lstm_forward_one_sequence(op, value, frameSize,
(gdb) info locals
No locals.
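Frame 2 of the backtrace pinpoints the overflow: `hppl::tanh` is computed as `2/(1+exp(-2a))-1`, and with the cell state `a = -355.639709` the argument to `exp` is `711.28`, beyond the ~709.78 limit of a double. `exp` then overflows, and with floating-point traps enabled that becomes the SIGFPE. A minimal sketch of the failure and of a formulation that never feeds a large positive argument to `exp` (illustrative only, not Paddle's code):

```python
import math

a = -355.639709  # cell state value reported in gdb frame 2

# Naive tanh as in hl_cpu_functions.cc: 2/(1+exp(-2a)) - 1.
# exp(711.28) exceeds the double range (overflow above ~709.78),
# which CPython surfaces as OverflowError; in C it sets the FP
# overflow flag, and a trapped flag becomes SIGFPE.
try:
    y = 2.0 / (1.0 + math.exp(-2.0 * a)) - 1.0
except OverflowError:
    y = None  # the naive form cannot represent the result

# Stable variant: pick the branch whose exp() argument is <= 0,
# so the intermediate can only underflow (harmlessly, to 0.0).
def tanh_stable(x):
    if x >= 0:
        z = math.exp(-2.0 * x)
        return (1.0 - z) / (1.0 + z)
    z = math.exp(2.0 * x)
    return (z - 1.0) / (z + 1.0)

print(y)               # None: naive form overflowed
print(tanh_stable(a))  # -1.0: tanh saturates for large |x|
```

The same branch-on-sign trick is the standard way libm implementations keep `tanh` overflow-free; it does not fix the diverging cell state itself, only the crash site.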

Paddle build options:

cmake -DWITH_GPU=ON -DWITH_DOC=OFF -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/opt/paddle ..

The BLAS backend is Intel MKL 11.3.3.210; the CPU is an Intel i5-4690K.

@qingqing01
Contributor
Hi @F0REacH, as you mentioned in #44, your sequence is very long (5000 time steps) and you use 7 LSTM layers, so a floating point exception can occur easily in this case. You could reduce the learning rate or the batch size; there are other options as well.
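Why long sequences make this likely: the frame 4 locals show the forget and input gates saturated near 1.0 (`rValueFg = 0.999997735`) with an input activation near -0.94, so the cell state `c_t = f*c_{t-1} + i*g_t` accumulates almost linearly each step. A toy recurrence (not Paddle's kernel, just the gate values gdb reported) shows the state crossing the ~355 magnitude where the naive tanh overflows after only a few hundred of the 5000 time steps:

```python
# Saturated gate values observed in gdb frame 4 of the backtrace.
f = 0.999997735   # forget gate ~ 1: previous state is kept almost fully
i = 0.999997735   # input gate ~ 1
g = -0.940858305  # input activation: each step adds ~ -0.94

c = 0.0
steps = 0
while abs(c) < 355.0:     # ~ the |state| where exp(-2a) overflows a double
    c = f * c + i * g     # LSTM cell-state recurrence: c_t = f*c_{t-1} + i*g_t
    steps += 1

print(steps, round(c, 1)) # a few hundred steps into a 5000-step sequence
```

This is why the advice above helps: a smaller learning rate (or batch) keeps the gate pre-activations out of saturation, so the forget gate stays below 1 and the state decays instead of accumulating without bound.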

@F0REacH
Contributor Author
F0REacH commented Sep 8, 2016

Hi @qingqing01, OK. With the latest hl_matrix_classification_error fix, training runs well on GPU, so this is no longer a problem for me. Thanks for clarifying this issue.

@F0REacH F0REacH closed this as completed Sep 10, 2016
@F0REacH F0REacH mentioned this issue Sep 10, 2016
jiweibo pushed a commit to jiweibo/Paddle that referenced this issue Dec 30, 2019
qingqing01 added a commit to qingqing01/Paddle that referenced this issue Apr 30, 2020
KPatr1ck pushed a commit to KPatr1ck/Paddle that referenced this issue Sep 16, 2021
XiaoguangHu01 pushed a commit that referenced this issue Sep 18, 2021
AnnaTrainingG pushed a commit to AnnaTrainingG/Paddle that referenced this issue Sep 29, 2021
gglin001 added a commit to graphcore/Paddle-fork that referenced this issue Dec 8, 2021
zhoutianzi666 pushed a commit to zhoutianzi666/Paddle that referenced this issue May 23, 2022
seemingwang pushed a commit to seemingwang/Paddle that referenced this issue Jun 23, 2022
jack603047588 referenced this issue in jack603047588/Paddle Nov 9, 2022
lizexu123 pushed a commit to lizexu123/Paddle that referenced this issue Feb 23, 2024
zmxdream pushed a commit to zmxdream/Paddle that referenced this issue Feb 27, 2024
hanhaowen-mt pushed a commit to hanhaowen-mt/Paddle that referenced this issue Feb 29, 2024
feifei-111 pushed a commit to feifei-111/Paddle that referenced this issue Mar 10, 2024