
Releases: KoboldAI/KoboldAI-Client

Looking for our latest KoboldAI product?

21 Aug 20:59

At the time of writing this post, the official KoboldAI branch is outdated and behind in model support.
Want to run the latest models? Want to avoid large downloads and installations?
Check out KoboldCpp, our GGUF-based solution.

Need a model for KoboldCpp?

  1. Go to Huggingface and look for GGUF models. To find the GGUF release of a specific model, search for part of its name followed by "GGUF".
  2. Go to the Files tab and pick the file size that best fits your hardware; Q4_K_S is a good balance.
  3. Click the small download icon to the right of the filename to download your GGUF.
  4. Load the GGUF in KoboldCpp; you can now use the AI.
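If you prefer scripting the download over clicking through the website, Huggingface serves raw repository files at a predictable `/resolve/<revision>/<file>` path. A minimal sketch of building such a direct-download URL; the repository and file names below are hypothetical placeholders, not a real model:

```python
def gguf_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the Hugging Face direct-download URL for a file in a model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Hypothetical repo/file names for illustration only.
url = gguf_url("SomeUser/SomeModel-GGUF", "somemodel.Q4_K_S.gguf")
print(url)
```

You can pass the resulting URL to any downloader (browser, `wget`, etc.); the official `huggingface_hub` Python package offers `hf_hub_download` for the same purpose.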

KoboldCpp is available for Windows, Linux, and ARM macOS.

(The files attached below are posted automatically by GitHub and do not work; please use the link above to obtain the release.)

1.19.2

20 Nov 15:27
f2077b8
  • Sampling order loading is fixed
  • More models
  • Flask_session is cleaned on launch (helps against bugs caused by switching between versions)
  • First base of GPU softprompt tuning (the interface is not yet added)
  • Compatibility improvements when other versions of conda are installed

1.19.1

12 Oct 13:40
59e3a40

This is a small release adding a few improvements, including a patch for a PyTorch vulnerability that is not fixed upstream.

  • Malicious PyTorch models now give an error instead of executing malicious code
  • The API can now influence the seed
  • Lua errors are now correctly shown as errors instead of debug messages
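The model-loading fix above addresses the well-known risk that PyTorch checkpoints are pickle files, and unpickling untrusted data can execute arbitrary code. The sketch below illustrates the general mitigation pattern (a restricted `pickle.Unpickler` that refuses to resolve any global), not KoboldAI's actual patch:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global reference, so a crafted pickle
    cannot smuggle in callables like os.system."""

    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} in untrusted pickle"
        )

def safe_loads(data: bytes):
    """Unpickle plain data (dicts, lists, numbers, strings) only."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers and tens?-free data round-trip fine, because their pickle opcodes never call `find_class`; any payload that references a global (the usual attack vector via `__reduce__`) raises `UnpicklingError` instead of executing.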

1.19.0

04 Oct 14:45
cf3aebb
Merge pull request #161 from henk717/united

Release 1.19

1.18.2

01 Oct 03:01

The final release for 1.18