
Obscenely slow prediction #159

Open
tecosaur opened this issue May 11, 2022 · 6 comments

@tecosaur

tecosaur commented May 11, 2022

Hello,

I'd love to use DecisionTree.jl for a project I'm currently working on, as it's great in a lot of ways: speedy to train, plays nicely with AbstractTrees, etc.

Unfortunately, saying the prediction performance is "not good" is putting things mildly. I did a test run with a simplified version of one of the data sets I'm working with, and recorded the training and prediction times of DecisionTree.jl as well as those of a number of other common random forest implementations.

| Tool            | Train time | Predict time | Ratio (predict/train) |
|-----------------|------------|--------------|-----------------------|
| DecisionTree.jl | 0.6s       | 175s         | 292                   |
| randomForest    | 24.4s      | 4.2s         | 0.17                  |
| ranger          | 1.9s       | 0.5s         | 0.26                  |
| sklearn         | 63s        | 1.7s         | 0.03                  |

The competitiveness of the training time gives me hope that DecisionTree.jl should be able to be competitive on prediction performance too 🙂.

@ablaom
Member

ablaom commented May 11, 2022

Thanks for reporting. That high prediction time is intriguing.

Could you please provide enough detail here for others to reproduce the results, using publicly available data, or synthetic data?

@tecosaur
Author

In the example above, I ran 1000 trees on a one-dimensional binary classification data set with ~70,000 entries. It should be pretty easy to generate something like this.
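
Something along these lines should produce data of roughly that shape; this is a minimal sketch with made-up parameters (the real data set is different), driving DecisionTree.jl's `build_forest`/`apply_forest` directly:

```julia
using DecisionTree, Random

# Synthetic stand-in for the data set above (an assumption, not the real data):
# one feature, two classes, ~70,000 rows, with enough noise to keep the trees non-trivial.
Random.seed!(1)
n = 70_000
X = randn(n, 1)
y = [x + 0.5 * randn() > 0 ? "a" : "b" for x in vec(X)]

model = @time build_forest(y, X, 1, 1000)  # n_subfeatures = 1, n_trees = 1000
@time apply_forest(model, X)               # this is the slow part
```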

If you have a look at https://github.com/tecosaur/TreeComparison and run `julia bulkreport.jl`, a number of reports will be generated. See `forest.jl` for the code running each implementation. While the results aren't as extreme, the disparity in the predict/train time ratio is still quite apparent. For example, with the Iris data:

| Tool            | Train time | Predict time | Ratio (predict/train) |
|-----------------|------------|--------------|-----------------------|
| DecisionTree.jl | 0.01s      | 0.32s        | 32                    |
| randomForest    | 1.28s      | 0.6s         | 0.47                  |
| ranger          | 0.08s      | 0.03s        | 0.38                  |
| sklearn         | 1.12s      | 0.1s         | 0.09                  |

@tecosaur
Author

Ok, I've started looking into this, and I've identified at least two major suboptimalities in the design. One is the implementation of tree evaluation/prediction; the other is the design of the `Leaf` struct.

I'm currently trying to replace `Leaf` with this structure:

struct Leaf{T, N}
    features :: NTuple{N, T}   # the N class labels that can occur at this leaf
    majority :: Int            # index into `features` of the most frequent class
    values   :: NTuple{N, Int} # per-class sample counts, accumulated at training time
    total    :: Int            # sum of `values`, cached so no per-sample pass is needed
end

This should make prediction with probability on a leaf O(1) instead of O(n) in the number of samples at the leaf. I am unfortunately finding a few parts of the code base hard to work with though, such as `src/classification/tree.jl` (functions with 18 positional arguments should be outlawed!).
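
To illustrate where the O(1) comes from, here is a minimal sketch (the helper `leaf_probabilities` is a name made up for illustration, not something in the package):

```julia
# With per-class counts stored in the leaf, the class probabilities are a single
# broadcast division, independent of how many training samples reached the leaf.
leaf_probabilities(leaf::Leaf) = leaf.values ./ leaf.total

# By contrast, a leaf that stores the raw vector of labels that reached it has to
# count them on every probabilistic prediction, which is where the O(n) comes from:
#   [count(==(c), labels_at_leaf) for c in classes] ./ length(labels_at_leaf)
```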

@tecosaur
Author

I've just done a bit more than the bare minimum, and so far the improvement in prediction-with-probability performance is 2-10x with a sample Iris dataset and a large-ish unidimensional data set. See: https://tecosaur.com/public/treeperf.html
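
If anyone wants to check the numbers on their own machine, something like the following should do (a sketch: `model`, `X`, and `y` are placeholders as in the synthetic example earlier, and the second call assumes the usual `apply_forest_proba(model, features, labels)` form):

```julia
using BenchmarkTools, DecisionTree

classes = sort(unique(y))
@btime apply_forest($model, $X)                  # hard class predictions
@btime apply_forest_proba($model, $X, $classes)  # per-class probabilities
```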

@ablaom
Member

ablaom commented May 16, 2022

This looks like progress to me. Do you think we could get away with marking the proposed change to the `Leaf` struct as non-breaking? As far as I can tell, this is non-public API, and in any case we are preserving the existing properties and the first type parameter.

Happy to review a PR.

@tecosaur
Author

Ok, I was hoping to make further improvements, but it would probably be worth PRing the basic tree improvements I've made.
