Central network-based database container for multiple Frigate processors #3305

Closed
markfrancisonly opened this issue Jun 5, 2022 · 14 comments
Labels: enhancement (New feature or request), stale

Comments

markfrancisonly commented Jun 5, 2022

Describe what you are trying to accomplish and why in non technical terms

The capability to host a database container separate from Frigate, supporting object recognition and storage scale-out across multiple low-power video processing containers. SQLite does not support this deployment scenario well, and it is also limited in performance at scale.

Describe the solution you'd like

  • ability to run a Frigate metadata database container independent of the Frigate video processing unit
  • a Frigate user interface that can read from a central database with multiple concurrent writers and load video from multiple independent Frigate video servers

Usage context and design considerations

Frigate currently supports a "pseudo" scale-out deployment by using Home Assistant to create a single surface for consuming data from multiple Frigate video processing servers, yet Frigate is so much larger than its Home Assistant integration! I'm a huge fan of Home Assistant, but the HA media browser integration doesn't meet my needs.

Multiple Frigate container database silos can be given a single surface in Home Assistant, but Frigate itself isn't capable of providing a single video viewing surface. The current pseudo scale-out is multiple Frigate databases that are unaware of each other. A central database spanning multiple video processing containers would provide a true scale-out model.

Example usage:

A deployment with 40 cameras needs to show events from all 40 cameras in one place. With a central database, it would be possible to run 10 Frigate video processing servers, each with a separate storage location, all feeding a single metadata repository of clips and recordings.

Food for thought: a Frigate server could itself be a camera ..... 😃 By the way, this scaling design is similar to the Milestone XProtect scale-out model, which has proven successful for large organizations.

markfrancisonly added the enhancement label Jun 5, 2022
NickM-27 (Collaborator) commented Jun 6, 2022

Peewee (the current Python DB library) seems to support a handful of different database types, so this should not be extremely difficult, but it is likely still relatively complicated to get going and tested.

I am not familiar with CockroachDB, but I think PostgreSQL is suitable for this type of use case and would be a good starting point.
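
For context, here's a minimal sketch of what pointing Peewee at PostgreSQL could look like; the `Event` model, host, and credentials below are illustrative placeholders, not Frigate's actual schema or configuration:

```python
from peewee import (
    CharField,
    DateTimeField,
    FloatField,
    Model,
    PostgresqlDatabase,
)

# Hypothetical central database; host and credentials are placeholders.
db = PostgresqlDatabase(
    "frigate",
    host="frigate-db.local",
    port=5432,
    user="frigate",
    password="changeme",
)


class Event(Model):
    """Illustrative model only, not Frigate's real schema."""

    camera = CharField()
    label = CharField()
    start_time = DateTimeField()
    top_score = FloatField(null=True)

    class Meta:
        database = db


db.connect()
db.create_tables([Event])
```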

@rgriffogoes

I see usage even in smaller deployments (e.g.: edge TPU-enabled device for event detection, ffmpeg+file storage on NAS server).

I guess the "easy" workaround would be to mount nfs shares and store data there (clips, recordings, sqlite files). No idea on stability with this "solution"

NickM-27 (Collaborator) commented Jun 7, 2022

> I see usage even in smaller deployments (e.g.: edge TPU-enabled device for event detection, ffmpeg+file storage on NAS server).
>
> I guess the "easy" workaround would be to mount nfs shares and store data there (clips, recordings, sqlite files). No idea on stability with this "solution"

Mounting shares works great since it goes through the standard Docker flow; many users already run Frigate this way.

kbrouder commented Jun 7, 2022

+1 on this. With 10 cameras and limited ability to get Corals that work with my systems, I was having trouble figuring out how to scale my system out appropriately. Any device I could get the M.2 TPU to work on didn't have a good enough GPU for decoding, plus I was losing the benefits of my server (network link aggregation, RAID, etc.). I finally found some mini PCIe Corals and ordered a motherboard I hope will support them, but it would be nice to be able to scale multiple workers into a system.

The main feature missing from the current approach is a unified event viewer. If that were added to the Lovelace card or to HASS, this potentially isn't needed, but from a future architecture perspective I think it would still be better to have it at the Frigate level. One idea: with zones it would be possible to link events together across multiple cameras. From a configuration perspective, maybe it's possible to split things into microservice-style workers, not just detection vs. recording but also a worker for DB cleanup (removing expired clips, etc.).

NickM-27 (Collaborator) commented Jun 7, 2022

@kbrouder to be clear, the system that does the decoding needs to do the detection as well, I don't believe it's feasible to separate those out

NickM-27 (Collaborator) commented Jun 7, 2022

Using a combination of a network share and a central database, I think the unified view should be feasible. Curious what @blakeblackshear thinks, as I may be missing something.

kbrouder commented Jun 7, 2022

> @kbrouder to be clear, the system that does the decoding needs to do the detection as well, I don't believe it's feasible to separate those out

Yeah, that makes sense... that would have to be a whole rearchitecture where the decoder sends the image via API to a detection system. That's probably not worth it, but a unified interface where several clusters can contribute to a master, or even a master that can read from child databases, would be helpful.

NickM-27 (Collaborator) commented Jun 7, 2022

> that would have to be a whole rearchitecture where the decoder sends the image via API to a detection system

The problem in general is latency too; it might be too high for the real-time approach that Frigate uses.

markfrancisonly (Author) commented Jun 8, 2022

This discussion is very useful for defining the scope of the enhancement.

My assumption is that pointing Peewee to MariaDB or another supported database is a relatively easy task, and worthwhile on its own, because having the option to connect externally to the Frigate database provides a foundation for new and interesting capabilities, including better horizontal scaling.
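
As a rough illustration of that idea, here is a hedged sketch using Peewee's `playhouse.db_url` helper; the `FRIGATE_DB_URL` environment variable and the SQLite fallback are assumptions for illustration, not an existing Frigate option:

```python
import os

from peewee import CharField, DatabaseProxy, Model
from playhouse.db_url import connect

# Deferred handle so models can be declared before the backend is chosen.
database_proxy = DatabaseProxy()


class Recording(Model):
    """Illustrative model only."""

    camera = CharField()
    path = CharField()

    class Meta:
        database = database_proxy


# Hypothetical setting, e.g.:
#   mysql://frigate:secret@db.local:3306/frigate          (MariaDB/MySQL)
#   postgresql://frigate:secret@db.local:5432/frigate     (PostgreSQL)
# Falls back to a local SQLite file when unset.
db_url = os.environ.get("FRIGATE_DB_URL", "sqlite:///frigate.db")
database_proxy.initialize(connect(db_url))
```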

Taking on the goal of supporting horizontal scaling with a common user interface and a shared database, I suggest limiting the scope of this FR to shared network storage. Here's why, illustrated with and without shared storage access:


Horizontal scaling: multiple Frigate instances with a common database and shared storage

advantages:

  • all cameras, recordings and events are available in one interface without relying on a 3rd party solution
  • one or all frigate instances are capable of serving the user interface / load balancing is a possibility

required work:

  • configuration published to the database at startup (see the sketch at the end of this comment)
  • ui updates to read/write deployment configuration from database
  • database concurrency testing
  • possible storage management features updates

Horizontal scaling: multiple Frigate instances with a common database, without shared storage access

advantages:

  • each frigate instance serves its own media
  • good load distribution for the storage- and processor-intensive aspects of the UI
  • storage management coordination may be unnecessary

disadvantages compared to shared storage:

  • each frigate instance must serve its own media
  • far more difficult work
  • networking and proxy complexity increased

required work:

  • new api features to serve media
  • architectural-level rework of ui to combine media feeds
  • configuration published to the database at startup
  • ui updates to read/write deployment configuration from database
  • database concurrency testing

the more I consider these two scenarios, the more I believe independent non-shared storage should be out-of-scope for the current feature request due to the level of effort. A single user interface over multiple databases should also be out-of-scope, imo
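
To make the "configuration published to the database at startup" item above more concrete, here is a hedged sketch of what an instance registry might look like; the `InstanceConfig` table, its fields, and the example values are hypothetical, not anything Frigate defines today:

```python
import socket
from datetime import datetime

from peewee import CharField, DateTimeField, Model, SqliteDatabase, TextField

# Placeholder handle; in the shared-database scenario this would be the
# central PostgreSQL/MariaDB connection instead of a local SQLite file.
db = SqliteDatabase("frigate.db")


class InstanceConfig(Model):
    """Hypothetical registry of Frigate instances sharing one database."""

    name = CharField(unique=True)   # e.g. "frigate-garage"
    base_url = CharField()          # where this instance serves its media
    config_yaml = TextField()       # the instance's own config, for the shared UI
    last_seen = DateTimeField()

    class Meta:
        database = db


def publish_config(name: str, base_url: str, config_yaml: str) -> None:
    """Insert or refresh this instance's row at startup."""
    row, created = InstanceConfig.get_or_create(
        name=name,
        defaults={
            "base_url": base_url,
            "config_yaml": config_yaml,
            "last_seen": datetime.utcnow(),
        },
    )
    if not created:
        row.base_url = base_url
        row.config_yaml = config_yaml
        row.last_seen = datetime.utcnow()
        row.save()


if __name__ == "__main__":
    db.connect()
    db.create_tables([InstanceConfig])
    publish_config(socket.gethostname(), "http://frigate-1.local:5000", "cameras: {}")
```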

@runningman84

Maybe something like S3 (or MinIO) could be used for storage? This would also allow tags to be attached to the objects.
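
As a rough sketch of that idea, here's what it could look like with boto3 against a self-hosted MinIO endpoint (bucket name, endpoint, credentials, and file name are all placeholders):

```python
import boto3

# Placeholder endpoint and credentials for a self-hosted MinIO instance.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.local:9000",
    aws_access_key_id="frigate",
    aws_secret_access_key="changeme",
)

# Upload a clip and attach tags; S3 object tagging uses a URL-encoded
# key=value string, so camera name, label, etc. can ride along.
with open("front_door-1654732800.mp4", "rb") as clip:
    s3.put_object(
        Bucket="frigate-clips",
        Key="front_door/1654732800.mp4",
        Body=clip,
        Tagging="camera=front_door&label=person",
    )
```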

NickM-27 (Collaborator) commented Jun 8, 2022

> the more I consider these two scenarios, the more I believe independent non-shared storage should be out-of-scope for the current feature request due to the level of effort. A single user interface over multiple databases should also be out-of-scope, imo

I agree here; scope creep could make this very difficult to tackle all at once, and I think this is a great starting point. The storage part using network shares should (I think) come mostly for free, since multiple servers could point to the same network share and not overwrite each other's files.

@markfrancisonly (Author)

> Maybe something like S3 (or MinIO) could be used for storage? This would also allow tags to be attached to the objects.

In this context, shared storage only means shared access.

XProtect defines a storage location per camera. This model could also work for Frigate, but the Frigate UI node would require knowledge of, and permission to access, every camera's storage location.

It can be useful to spread streams across cheap, large, but slow drives.

kbrouder commented Jun 9, 2022

I haven't created a dev environment or examined the code base yet, but isn't the UI already separate? Is the UI separate from the API? Would it be difficult to have a UI that can pull from multiple instances?

I'm probably missing something in the architecture, but that would be the cleanest approach to meeting several needs, imo. I'm a .NET guy and haven't done any web dev with video, so I'm no expert by any means. I've got too many projects at the moment, but at some point I'd love to contribute to this project; I bet it will be solved by someone who knows what they're doing before I find the time.

stale bot commented Jul 9, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
