pre-train accuracy for fbwq_half #116

Open
lihuiliullh opened this issue Jan 9, 2022 · 6 comments

@lihuiliullh

May I know what accuracy you get when using kge to train fbwq_half?
Also, when you used kge to train fbwq_half, did you use only train.txt?

@lihuiliullh (Author)

I used the pre-trained embedding for KGQA, and the accuracy is very low. How did you learn the embedding using kge? @apoorvumang

@apoorvumang (Collaborator)

Can you please elaborate on what experiment you did and what commands you used?

@lihuiliullh (Author)

I tried to use kge to get the pre-trained embeddings for fbwq_half, and then used those embeddings for multi-hop question answering on fbwq_half. The sketch below shows how I export them. @apoorvumang
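
For context, the way I export the trained embeddings from the LibKGE checkpoint before feeding them to the KGQA model is roughly the following. This is only a minimal sketch: the checkpoint path is a placeholder for my fbwq_half run, the loading calls follow the LibKGE README, and the embedder accessors reflect my understanding of its API.

    # Rough sketch of exporting pre-trained ComplEx embeddings from a LibKGE
    # checkpoint (the path is a placeholder for my fbwq_half training run).
    import torch
    from kge.model import KgeModel
    from kge.util.io import load_checkpoint

    checkpoint = load_checkpoint("local/experiments/fbwq_half-complex/checkpoint_best.pt")
    model = KgeModel.create_from(checkpoint)

    # Dense embedding matrices: (num_entities x dim) and (num_relations x dim).
    entity_emb = model.get_s_embedder().embed_all().detach().cpu()
    relation_emb = model.get_p_embedder().embed_all().detach().cpu()

    torch.save({"entity": entity_emb, "relation": relation_emb},
               "fbwq_half_complex_embeddings.pt")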

@lihuiliullh (Author)

But the pre-training (link prediction) accuracy is very low when using kge, and if I use that embedding, the KGQA accuracy is also much lower than the result reported in the paper. So I want to know how you use kge to pre-train the model. The config file I used is listed below.

complex:
  entity_embedder:
    dropout: 0.44299429655075073
    regularize_weight: 7.830760727899156e-12
  relation_embedder:
    dropout: -0.4746062345802784
    regularize_weight: 1.182876478423781e-10
dataset:
  name: fbwq_half
eval:
  batch_size: 200
  chunk_size: 25000
  num_workers: 2
import:
- complex
lookup_embedder:
  dim: 400
  initialize_args:
    normal_:
      mean: 0.0
      std: 5.8970567449527816e-05
  regularize_args:
    p: 1
    weighted: true
  sparse: true
model: complex
negative_sampling:
  implementation: batch
  num_samples:
    o: 7851
    s: 2176
  shared: true
  with_replacement: false
train:
  auto_correct: true
  batch_size: 1024
  loss_arg: .nan
  lr_scheduler: ReduceLROnPlateau
  lr_scheduler_args:
    factor: 0.95
    mode: max
    patience: 1
    threshold: 0.0001
  max_epochs: 200
  num_workers: 8
  optimizer_args:
    lr: 0.6560544891789137
  type: negative_sampling
valid:
  early_stopping:
    min_threshold:
      epochs: 10
      metric_value: 0.1
    patience: 10
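
For reference, this is roughly how I sanity-check the link-prediction quality of the checkpoint produced by this config. It is my own sketch rather than a LibKGE built-in script: it computes raw (unfiltered) hits@10 for tail prediction only, on a subset of the test split, and the checkpoint path is again a placeholder.

    # Quick, unfiltered hits@10 check on tail prediction for a trained checkpoint.
    import torch
    from kge.model import KgeModel
    from kge.util.io import load_checkpoint

    checkpoint = load_checkpoint("local/experiments/fbwq_half-complex/checkpoint_best.pt")
    model = KgeModel.create_from(checkpoint)

    # First 1000 test triples only, to keep the score matrix small.
    triples = model.dataset.split("test")[:1000]
    s, p, o = triples[:, 0], triples[:, 1], triples[:, 2]

    with torch.no_grad():
        scores = model.score_sp(s, p)                  # (n, num_entities)
        true_score = scores.gather(1, o.view(-1, 1))   # score of the gold object
        ranks = (scores > true_score).sum(dim=1) + 1   # raw rank of the gold object

    print("raw hits@10 (tail only):", (ranks <= 10).float().mean().item())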

@apoorvumang (Collaborator)

Thanks for the details, let me try it with this config and I'll get back to you.

@lihuiliullh (Author)

Can you share your config file with me, so I can get the same performance as reported in the paper? @apoorvumang
