
Cross-actor, shm, (tick) ring buffer #107

Open · Tracked by #339
goodboy opened this issue Aug 20, 2020 · 3 comments
Labels: data-layer (real-time and historical data processing and storage), feature-request (New feature or request), integration (external stack and/or lib augmentations)

Comments

goodboy (Contributor) commented Aug 20, 2020

As part of the fsp subsystem design we're likely going to want to implement some shared memory systems for low-latency, multi-timeframe calcs. I've sketched a couple of design ideas about this in #98, but I'm fairly sure part of this system will require a fast ring buffer for numpy data:

Here's a starter list of projects to try, after some very brief searching:


committed to long ago

committed recently
  • 2020: pyring, a pure-python impl with a multi-process ctypes example
  • 2020: ringbuf, which looks to be built for real-time audio and is single-producer/single-consumer (actually fine for our use case afaik), but is a cython wrapper around boost (ala C++) libs; the numpy example looks good
  • 2021: a redo of ringbuf in rust targeting some missing features, with ctypes wrapping/integration for python
  • 2020: circular_buffer_numpy, which seems like a simple wrapper around an array with "pointer" index references, kinda like what I've rolled many times before

further resources

It may just end up that we take a lang from #106 and implement one, or try out some of the designs above on the new SharedMemory type, which has an example using a numpy array. An example wrapper for an older version of this from the scipy cookbooks is here.
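As a rough sketch of the SharedMemory + numpy combo mentioned above (illustrative only, not the actual piker internals), one process allocates a block and views it as an ndarray, and any other process attaches by name with zero copies:

```python
# Sketch: a numpy array over the stdlib SharedMemory block (Python 3.8+).
# Both "sides" run in one process here purely for illustration; in the
# real thing the reader would be a separate actor handed the shm name.
from multiprocessing import shared_memory
import numpy as np

shape, dtype = (1024,), np.dtype('f8')

# writer side: allocate the block and view it as an ndarray
shm_w = shared_memory.SharedMemory(create=True, size=shape[0] * dtype.itemsize)
ticks_w = np.ndarray(shape, dtype=dtype, buffer=shm_w.buf)
ticks_w[0] = 42.5

# reader side: attach by name (normally passed over an IPC channel)
shm_r = shared_memory.SharedMemory(name=shm_w.name)
ticks_r = np.ndarray(shape, dtype=dtype, buffer=shm_r.buf)
assert ticks_r[0] == 42.5  # same memory, no copy

# drop the array views before closing, else `close()` raises BufferError
del ticks_w, ticks_r
shm_r.close()
shm_w.close()
shm_w.unlink()  # only the creating side should unlink
```

Note the teardown ordering gotcha: this is likely related to the "resource tracker" complaints seen at actor shutdown.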


disruptor style

The "lurkers" said LMAX already did this best with their disruptor project. I think we can make a very cool variant of this with actors, numpy and numba:

  • slickin paper
  • pump vid
  • there's more resources on the disruptor link 🏄
  • data_pipe is another one to check out (though not sure it'll have numpy support at all).
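The core disruptor trick can be sketched in a few lines: a power-of-two ring addressed by ever-increasing sequence numbers (slot = seq & mask), with the producer and consumer each owning one counter. This is a pure-python, single-process toy (class and names are mine, not from any of the libs above); a real cross-actor version would keep the counters in shared memory with proper memory fencing:

```python
import numpy as np

class SpscRing:
    """Toy single-producer/single-consumer disruptor-style ring."""

    def __init__(self, capacity: int, dtype='f8'):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of 2"
        self.buf = np.zeros(capacity, dtype=dtype)
        self.mask = capacity - 1
        self.write_seq = 0  # next slot the producer will fill
        self.read_seq = 0   # next slot the consumer will read

    def push(self, value) -> bool:
        if self.write_seq - self.read_seq == len(self.buf):
            return False  # ring full: producer must wait (or overwrite)
        self.buf[self.write_seq & self.mask] = value
        self.write_seq += 1  # "publish" the slot
        return True

    def pop(self):
        if self.read_seq == self.write_seq:
            return None  # ring empty
        value = self.buf[self.read_seq & self.mask].item()
        self.read_seq += 1  # free the slot for reuse
        return value

ring = SpscRing(4)
for v in (1.0, 2.0, 3.0):
    ring.push(v)
assert ring.pop() == 1.0
```

Because seqs only ever increase and each side writes only its own counter, no locks are needed in the single-producer/single-consumer case.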

As always, lurkers please chime in.

@goodboy goodboy added feature-request New feature or request help wanted integration external stack and/or lib augmentations data-layer real-time and historical data processing and storage labels Aug 20, 2020
goodboy (Contributor, Author) commented Aug 21, 2020

The "lurkers" said LMAX already did this best with their disruptor project:

  • slickin paper
  • pump vid
  • there's more resources on the disruptor link 🏄

I think we can make a very cool variant of this with actors, numpy and numba.

goodboy added a commit that referenced this issue Sep 16, 2020
This adds a shared memory "incrementing array" sub-sys interface
for single writer, multi-reader style data passing. The main motivation
is to avoid multiple copies of the same `numpy` array across actors
(plus now we can start being fancy like ray).

There still seems to be some odd issues with the "resource tracker"
complaining at teardown (likely partially to do with SIGINT stuff) so
some further digging in the stdlib code is likely coming.

Pertains to #107 and #98
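The "incrementing array" idea from the commit above can be sketched roughly like this (class and method names are illustrative, not the actual sub-sys interface): the single writer only ever appends and then bumps a published-length counter, so any number of readers can take consistent prefix snapshots without locks:

```python
import numpy as np

class IncrementingArray:
    """Toy single-writer/multi-reader append-only array."""

    def __init__(self, capacity: int, dtype='f8'):
        # in the real cross-actor version both the buffer and the
        # counter would live inside one SharedMemory block
        self.buf = np.zeros(capacity, dtype=dtype)
        self.length = 0  # monotonically increasing publish index

    def append(self, value) -> None:
        self.buf[self.length] = value
        self.length += 1  # publish: readers never see a half-written slot

    def snapshot(self) -> np.ndarray:
        # readers only ever view the published prefix
        return self.buf[:self.length]

arr = IncrementingArray(16)
for px in (100.0, 100.5, 99.75):
    arr.append(px)
assert arr.snapshot().tolist() == [100.0, 100.5, 99.75]
```

Since the buffer is append-only (no wrap-around), readers tracking their own read offsets get the zero-copy, no-duplicate-arrays property the commit is after.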
goodboy added a commit that referenced this issue Sep 26, 2020 (same message as above)
goodboy added a commit that referenced this issue Sep 29, 2020 (same message as above)
goodboy added a commit that referenced this issue Oct 2, 2020 (same message as above)
goodboy (Contributor, Author) commented Feb 25, 2021

data_pipe is another one to check out (though not sure it'll have numpy support at all).

Looks to imply it has some disruptor style examples.

goodboy (Contributor, Author) commented Jun 2, 2021

This definitely can tie in with #192 @guilledk

goodboy added a commit that referenced this issue Jun 15, 2021
Adding binance's "hft" ws feeds has resulted in a lot of context
switching in our Qt charts, so much so that it's chewing CPU and it's
definitely worth throttling to the detected display rate as per
discussion in issue #192.

This is a first, very naive attempt at throttling L1 tick feeds on
the `brokerd` end (producer side) using a constant and uniform delivery
rate by way of a `trio` task + mem chan.  The new func is
`data._sampling.uniform_rate_send()`. Basically if a client requests
a feed and provides a throttle rate, we just spawn a task and queue up
ticks until approximately the next display rate's worth of time has
passed before forwarding. It's definitely nothing fancy but does
provide fodder and a starting point for an up-and-coming queueing eng to
start digging into both #107 and #109 ;)
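The batching idea in that commit can be shown as a synchronous simulation (the function name and shape here are mine; the real `uniform_rate_send()` runs as a `trio` task feeding a memory channel): buffer incoming ticks and flush one frame per display period, so the chart sees at most `rate_hz` messages per second regardless of feed burstiness:

```python
def throttle(ticks, rate_hz: float):
    """Group (timestamp, tick) events into frames roughly one period apart."""
    period = 1.0 / rate_hz
    frames, batch, next_flush = [], [], None
    for ts, tick in ticks:
        if next_flush is None:
            next_flush = ts + period  # first tick starts the clock
        if ts >= next_flush:
            frames.append(batch)      # forward the queued batch
            batch = []
            next_flush = ts + period
        batch.append(tick)
    if batch:
        frames.append(batch)          # flush any trailing partial frame
    return frames

# 6 ticks bursting in over ~55ms, throttled to 60 Hz (~16.7ms frames)
events = [(0.000, 'a'), (0.005, 'b'), (0.020, 'c'),
          (0.022, 'd'), (0.040, 'e'), (0.055, 'f')]
frames = throttle(events, 60)
# three frames of two ticks each instead of six separate chart updates
```

The real version would `await` on the mem chan with a timeout rather than iterating a pre-collected list, but the batching logic is the same.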
@goodboy goodboy changed the title Numpy compatible shared mem ring buffer Cross-actor, shared mem, (tick) ring buffer Mar 6, 2023
This was referenced Jun 27, 2023
@goodboy goodboy changed the title Cross-actor, shared mem, (tick) ring buffer Cross-actor, shm, (tick) ring buffer Aug 4, 2023