This repository has been archived by the owner on Aug 31, 2018. It is now read-only.

please develop native threads that can load native modules with require and share/lock objects #31

Open
shimondoodkin opened this issue Aug 26, 2017 · 16 comments

Comments

@shimondoodkin

  • Version:
  • Platform:
  • Subsystem:
@Fishrock123
Contributor

Will you fund the development of this? 😉

Seriously though, this would be a gigantic undertaking.

@zkat
Contributor

zkat commented Aug 26, 2017

I'm personally against adding shared-memory threading to Node. I think having stuff like lightweight WebWorkers is pretty alright, and probably requires a lot less work -- but the implications of adding shared-memory threading to a language that is woefully unprepared for it would be... a ton of work for something not very great, and not that much better than existing alternatives.

Please remember that Node, through libuv, already parallelizes a number of I/O-related tasks at the C++ level, and does it without compromising the thread safety of user-level code. I believe this is already hugely beneficial to a platform like Node and adds to the pile of reasons to think more complicated threading will just result in minimal benefits for the user.

Please keep in mind that child process workers are relatively easy to spawn, and often serve the use cases in question well enough.
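For anyone unfamiliar with that pattern, here is a minimal sketch of a child-process worker; the file names and message shape are invented for the example:

```js
// parent.js -- spawn a worker process and hand it a CPU-heavy job
const { fork } = require('child_process');

const worker = fork('./worker.js');

worker.on('message', (result) => {
  console.log('result from worker:', result);
  worker.kill();
});

worker.send({ cmd: 'sum', numbers: [1, 2, 3, 4, 5] });

// worker.js -- runs in its own process, so nothing is shared with the parent;
// messages are copied (serialized) across the process boundary
process.on('message', ({ cmd, numbers }) => {
  if (cmd === 'sum') {
    process.send(numbers.reduce((a, b) => a + b, 0));
  }
});
```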

@shimondoodkin
Author

shimondoodkin commented Aug 26, 2017 via email

@shimondoodkin
Author

You can close this if you think it is unreasonable. I'm ok with it.

@Qard
Member

Qard commented Aug 26, 2017

I think for encoding/decoding overhead, you'd get more mileage just using streamable encoders and decoders. The marshalling and unmarshalling required to take a block of JS data and pass it through native code to another thread would most likely outweigh the existing encoding/decoding overhead anyway...

@Fishrock123
Contributor

There are some areas where threading, or rather, full or partial workers, would be useful.

Partial, or rather, computation-only (no I/O access) workers are probably reasonable to implement and would be fairly useful for, say, off-thread template rendering.

@zkat
Contributor

zkat commented Aug 27, 2017

I'm pretty onboard with some sort of native WebWorker thing being added. Those could actually be super handy, and they're shared-nothing so 💯

@shimondoodkin
Author

shimondoodkin commented Aug 27, 2017

As one of the solutions to the problem I had, I thought it might be useful to develop a module providing a shared-memory object or array:
for example, take a simple C++11 JSON object class, wrap it with V8,
and instantiate it as a kind of shared smart pointer. (not sure if this helps)

If the threads are different processes, this shared object would need to be named in shm.
If it is a thread, it could just be a shared pointer with locks on access.

I meant sharing C++ objects with accessors (good enough for use); maybe that could be simple.
Maybe serialize only what is accessed, not the whole object.

@Qard
Member

Qard commented Aug 27, 2017

It's not quite so simple. Sharing V8 objects between different isolates is not safe, so any data you want to move around needs to be marshalled and unmarshalled at the isolate barriers, which has rather substantial overhead. The V8 serialization API is exposed in recent versions of node, which helps, but there's still a noticeable performance penalty. https://nodejs.org/api/v8.html#v8_serialization_api
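For reference, a minimal sketch of that serialization API (available in Node 8.x and later); the sample value is arbitrary:

```js
const v8 = require('v8');
const assert = require('assert');

// Structured-clone a value to a Buffer and back again.
// This is roughly the marshalling cost paid at every isolate boundary.
const original = { user: 'ada', scores: [1, 2, 3], seen: new Map([['a', 1]]) };
const buf = v8.serialize(original);   // Buffer holding the serialized value
const copy = v8.deserialize(buf);     // a deep copy, not a shared reference

assert.notStrictEqual(copy, original);
assert.strictEqual(copy.seen.get('a'), 1);
```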

@ayojs deleted a comment from jbustillos Aug 27, 2017
@benjamingr
Contributor

@petkaantonov wrote a pull request for Node.js but burned out on it partway through.

You can take a look at petkaantonov/io.js@ea143f7

@addaleax
Contributor

addaleax commented Aug 29, 2017

Fwiw, I'm still planning on working on this in the near future, with the goal of getting something along the lines of Petka's work going.

Also, re: shared memory -- I think it's not what @zkat was referring to, and I don't exactly have much background knowledge, but I think enabling SharedArrayBuffers should be doable.

(Like others said, shared memory access to JS objects is pretty much out of the question, for multiple reasons.)

EDIT: I'd say, expect a PR by around this weekend :)
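For context, a rough sketch of what shared memory via SharedArrayBuffer looks like with the worker_threads API that eventually grew out of this work (it was not available at the time of this thread):

```js
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  // Four bytes of shared memory, viewed as a single 32-bit integer.
  const shared = new SharedArrayBuffer(4);
  const view = new Int32Array(shared);

  // Passing a SharedArrayBuffer as workerData shares the memory, not a copy.
  const worker = new Worker(__filename, { workerData: shared });
  worker.on('exit', () => {
    console.log('value written by the worker:', Atomics.load(view, 0)); // 42
  });
} else {
  const view = new Int32Array(workerData);
  Atomics.store(view, 0, 42); // visible to the main thread without copying
}
```

Only the raw bytes are shared; JS objects themselves still cannot be shared between threads, as noted above.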

@tagkiller

Hi all,

Sorry in advance if I'm mistaken, but couldn't WebKit's work on threading be useful for achieving the goal of this issue?
As V8 historically shares the same base as WebKit, it could be worth looking in that direction.

Here is a link to what I'm referring to: https://webkit.org/blog/7846/concurrent-javascript-it-can-work/

Regards,

@benjamingr
Contributor

benjamingr commented Aug 31, 2017

@tagkiller - we could have done this two years ago (nodejs/node#1159), but the collaborator got tired of not having enough feedback from core and it was eventually closed as nodejs/node#2133.

This is still entirely possible regardless of WebKit.

@Qard
Member

Qard commented Aug 31, 2017

Yeah, I should clarify my comments above: threading is definitely possible, and there's been a bunch of work on that in the past. Threading for the purpose of offloading JSON serialization/deserialization, however, likely has little benefit, due to the overhead of moving data between threads in a safe way.

@brodycj
Contributor

brodycj commented Sep 18, 2017

I have a couple of questions:

  • How soon should we expect the initial worker implementation in #58 to land?
  • What kind of prospects do we have of getting this enhancement into main Node.js or perhaps even node-chakracore?

I would personally vote to land #58 soon for a couple reasons:

  • Start to get the feature better tested, reviewed, and hardened by the user community
  • Put Ayo.js on the map, at a technical level

@TimothyGu
Member

> How soon should we expect the initial worker implementation in #58 to land?

First of all, remember we are all volunteers here. Nobody that I know of is paying @addaleax to write Worker, and as such there is no deadline attached.

That said, I personally expect that it'll still be a while before it gets landed. Much code review is needed, and as of now there hasn't been much of it.

> What kind of prospects do we have of getting this enhancement into main Node.js or perhaps even node-chakracore?

Technically there is no barrier for this to be merged into Node.js, as our code has not yet diverged from Node.js'. However, it's up to @addaleax if she would like to take the time to upstream this and go through their review process (see #40 (comment)). If not, someone else would have to.

I am not familiar with node-chakracore, but from what I know they will have to at least do some work to support certain features (like ArrayBuffer transfers) in their V8-to-ChakraCore API bridge (chakrashim), or maybe even ChakraCore itself.
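As an aside, "ArrayBuffer transfers" here means moving a buffer between threads without copying it; a minimal sketch using the worker_threads API as it later landed:

```js
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);
  const buf = new ArrayBuffer(8);

  // Listing buf in the transfer list moves it instead of copying it;
  // afterwards buf is detached (byteLength === 0) in this thread.
  worker.postMessage(buf, [buf]);
  console.log('byteLength after transfer:', buf.byteLength); // 0
} else {
  parentPort.once('message', (received) => {
    console.log('received an ArrayBuffer of', received.byteLength, 'bytes'); // 8
  });
}
```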
