Fooocus-API


FastAPI powered API for Fooocus.

Currently loaded Fooocus version: 2.1.852.

Run with Replicate

Now you can use Fooocus-API on Replicate; the model is available at konieshadow/fooocus-api. Variants with presets are also published there.

I believe this is the easiest way to generate images with the power of Fooocus.

Reuse model files from Fooocus

You can simply copy the config.txt file from your local Fooocus folder to Fooocus-API's root folder. See Customization for details.

Start app

Requires Python >= 3.10, or use conda to create a new environment:

conda env create -f environment.yaml
conda activate fooocus-api

Run

python main.py

By default, the server listens on http://127.0.0.1:8888.

CMD Flags

  • -h, --help Show this help message and exit
  • --port PORT Set the listening port, default: 8888
  • --host HOST Set the listening host, default: 127.0.0.1
  • --base-url BASE_URL Set the base URL for external access, default: http://host:port
  • --log-level LOG_LEVEL Log level for Uvicorn, default: info
  • --sync-repo SYNC_REPO Sync dependent git repositories locally; 'skip' skips the sync, 'only' performs the sync without launching the app
  • --skip-pip Skip the automatic pip install during setup
  • --preload-pipeline Preload the pipeline before starting the HTTP server
  • --queue-size QUEUE_SIZE Working queue size, default: 3; generation requests exceeding the queue size return a failure
  • --queue-history QUEUE_HISTORY Number of finished jobs to keep, default: 100; tasks exceeding the limit are deleted, including their output image files
  • --webhook-url WEBHOOK_URL Webhook URL for notifying about generation results, default: None
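For example (values here are arbitrary), to listen on all interfaces with a larger working queue:

python main.py --host 0.0.0.0 --port 8080 --queue-size 5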

Since v0.3.25, Fooocus's own CMD flags are also supported; you can pass any argument that Fooocus accepts.

For example, to speed up image generation by keeping everything on the GPU in FP16 (requires more VRAM):

python main.py --all-in-fp16 --always-gpu

For Fooocus CMD flags, see here.

Start with docker

Before using Docker with a GPU, install the NVIDIA Container Toolkit first.

Run

docker run --gpus=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all -p 8888:8888 konieshadow/fooocus-api

For more complex usage:

mkdir ~/repositories
mkdir -p ~/.cache/pip

docker run --gpus=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all \
    -v ~/repositories:/app/repositories \
    -v ~/.cache/pip:/root/.cache/pip \
    -p 8888:8888 konieshadow/fooocus-api

This persists the dependent repositories and the pip cache.

You can add -e PIP_INDEX_URL={pypi-mirror-url} to the docker run command to change the pip index URL.

Test API

You can open the Swagger documentation at http://127.0.0.1:8888/docs, then click "Try it out" to send a request.
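If you prefer scripting it, here is a minimal sketch of a synchronous request with Python's requests library; the prompt value is a placeholder, and the full request schema is documented in the Swagger page above:

import requests

# Synchronous text-to-image request; the call blocks until generation finishes.
response = requests.post(
    "http://127.0.0.1:8888/v1/generation/text-to-image",
    json={"prompt": "a cat wearing sunglasses"},
)
# Each result contains a url field pointing to the generated image (see "Completed APIs").
print(response.json())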

Update logs

Please visit the releases page for changes in each version.

Completed APIs

For the Swagger OpenAPI definition, see openapi.json.

You can import it into the Swagger-UI editor.

All generation APIs can return the PNG bytes directly when the request's 'Accept' header is 'image/png'.
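For example, a small sketch (reusing the text-to-image endpoint and a placeholder prompt) that saves the returned PNG bytes:

import requests

# Setting Accept: image/png makes the server return the image bytes directly instead of JSON.
response = requests.post(
    "http://127.0.0.1:8888/v1/generation/text-to-image",
    headers={"Accept": "image/png"},
    json={"prompt": "a cat wearing sunglasses"},
)
with open("result.png", "wb") as f:
    f.write(response.content)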

All generation APIs support asynchronous processing: set the parameter async_process to true, then use the Query Job API to retrieve progress and generation results.
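A rough sketch of the async flow; it assumes the submit response and the Query Job API both expose the job_id field mentioned in the breaking-changes notes below, and the polling interval is arbitrary:

import time
import requests

BASE = "http://127.0.0.1:8888"

# Submit asynchronously; the request returns immediately instead of blocking.
job = requests.post(
    f"{BASE}/v1/generation/text-to-image",
    json={"prompt": "a cat wearing sunglasses", "async_process": True},
).json()

# Poll the Query Job API; each response reports progress and, once finished, the results.
for _ in range(30):
    status = requests.get(
        f"{BASE}/v1/generation/query-job",
        params={"job_id": job["job_id"]},
    ).json()
    print(status)
    time.sleep(2)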

Breaking changes from v0.3.26:

  • The job_id field returned by the Query Job and Query Job Queue Info APIs changed type to str. It is now a UUID, which avoids conflicts across restarts.

Breaking changes from v0.3.25:

  • Removed the CLI argument disable-private-log. You can use Fooocus's --disable-image-log for the same purpose.

Breaking changes from v0.3.24:

  • This version merged Fooocus v2.1.839, which includes a breaking change to seed handling. See the details for 2.1.839.

Breaking changes from v0.3.16:

  • The parameter format for loras has changed for the img2img APIs (the multipart/form-data requests). It now requires a JSON string.

Breaking changes from v0.3.0:

  • The generation APIs do not return the base64 field unless the request parameter require_base64 is set to true.
  • The generation APIs return a url field from which the generated image can be fetched as a static file.

Breaking changes from v0.3.21:

  • The seed field in generation results changed to type String to avoid numerical overflow.

Text to Image

POST /v1/generation/text-to-image

Alternative API for the normal image generation of the Fooocus Gradio interface.

Image Upscale or Variation

For multipart/form-data request:

POST /v1/generation/image-upscale-vary

For application/json request:

POST /v2/generation/image-upscale-vary

Alternative API for the 'Upscale or Variation' tab of the Fooocus Gradio interface.

Image Inpaint or Outpaint

For multipart/form-data request:

POST /v1/generation/image-inpait-outpaint

For application/json request:

POST /v2/generation/image-inpait-outpaint

Alternative API for the 'Inpaint or Outpaint' tab of the Fooocus Gradio interface.

Image Prompt

For multipart/form-data request:

POST /v1/generation/image-prompt

For application/json request:

POST /v2/generation/image-prompt

Alternative API for the 'Image Prompt' tab of the Fooocus Gradio interface.

Query Job

GET /v1/generation/query-job

Query the results of an async generation request; returns job progress and generation results.

You can also retrieve a preview image of the current generation step through this API.

Query Job Queue Info

GET /v1/generation/job-queue

Query job queue info, including the running job count, finished job count, and last job ID.

Stop Generation task

POST /v1/generation/stop

Stop the current generation task.

Get All Model Names

GET /v1/engines/all-models

Get all filenames of base models and LoRAs.
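A quick sketch with requests:

import requests

# List the filenames of available base models and LoRAs.
print(requests.get("http://127.0.0.1:8888/v1/engines/all-models").json())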

Refresh Models

POST /v1/engines/refresh-models

Get All Fooocus Styles

GET /v1/engines/styles

Get all available Fooocus styles.
