Maxime Rouiller edited this page Apr 20, 2015 · 2 revisions

AtomEventStore supports as many independent event streams as the underlying storage mechanism can handle.

  • When using the file-based storage, the support is ultimately limited by how many files and folders can be stored under a single root folder.
  • When using Microsoft Azure BLOB storage, the support is ultimately limited by the number and size of BLOBs that a single storage container can hold.
  • When using the memory-based storage, the support is ultimately limited by the available memory of the client machine.

When writing events to storage, each event is appended to a document with a configurable page size. When a page is full, a new page is created. This bounds the cost of any single write operation: no write is ever more expensive than the one that fills a page and triggers the creation of the next.
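The paged append strategy can be sketched as follows. This is an illustrative Python model of the concept, not the actual AtomEventStore .NET API; the class and member names are invented for the example.

```python
class PagedEventWriter:
    """Sketch of paged, append-only event storage: events go into
    fixed-capacity pages, and a fresh page is started when the
    current one fills up. (Hypothetical names; not the real API.)"""

    def __init__(self, page_size):
        self.page_size = page_size  # the configurable page size
        self.pages = [[]]           # last page is the write head

    def append(self, event):
        # Worst case for a single append: the current page is full,
        # so a new empty page is created first. The cost is therefore
        # bounded by the page size, regardless of stream length.
        if len(self.pages[-1]) >= self.page_size:
            self.pages.append([])
        self.pages[-1].append(event)


writer = PagedEventWriter(page_size=3)
for i in range(7):
    writer.append({"id": i})
print([len(p) for p in writer.pages])  # → [3, 3, 1]
```

Because old pages are never revisited on write, the stream can grow without any write becoming slower.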

During reads, events are supplied lazily via an Iterator over the pages of the underlying Atom Feed. The Iterator never holds more than two Atom Feed pages in memory at a time, and each page is bounded by the configurable page size, so the Iterator can enumerate even very long event streams. If the client of such a lazily enumerated sequence handles long streams appropriately (e.g. by aggregating the events instead of keeping each event in memory), it can scale to very long event streams.
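The bounded-memory read can be sketched like this. Again, this is an illustrative Python model of the idea, assuming a generator of pages stands in for walking the Atom Feed documents; it is not the library's actual API.

```python
def read_events(pages):
    """Lazily enumerate events page by page. At any moment, at most
    two pages are materialized: the current page being yielded and
    the next page being fetched. (Sketch; the real library walks
    linked Atom Feed documents instead of a Python iterable.)"""
    it = iter(pages)
    current = next(it, None)
    while current is not None:
        nxt = next(it, None)  # at most two pages held here
        yield from current
        current = nxt


# A client that aggregates instead of retaining every event scales
# to arbitrarily long streams: memory stays bounded by the page size.
pages = ([{"amount": n} for n in range(i, i + 3)] for i in range(0, 9, 3))
total = sum(e["amount"] for e in read_events(pages))
print(total)  # → 36 (sum of 0..8)
```

The generator of pages is itself lazy, so no page is produced before the Iterator asks for it.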

Note: that AtomEventStore is designed to be scalable doesn't necessarily imply that it's fast. While it's designed to be reasonably efficient, where design trade-offs were necessary, scalability has been prioritized over speed.
