Alternative to .onBeforeCompile for material extension #14232
Related: #14231. My observation from these is that, overall, these kinds of modifications to the built-in materials are not advised, and are seen as an anti-pattern compared with both currently available alternatives and future proposed ones (see #14098 specifically, where template copying and casting to […] are discussed). Are you familiar with […]?
I find this proposal too restrictive. #14231 proposes something which may be a bit more flexible, and leaves this kind of warning to whoever implements some feature using some extension API. Three.js throws a lot of warnings as is. I've put together a demo with #14231: it shows how to create a simple subclass of […]. #14231 takes a very minimalist approach: it aims to touch the least amount of files, and the least amount of lines, while being in compliance with the linter and legible. I imagine a much more powerful interface could be conceived if it had the privilege to introduce its own classes like you're proposing.
I think your approach is valid, although I don't like the "zoo" of include sources. I would have preferred an include library which may cascade internally and have overrides, but keeps that away from the encapsulating logic. The overall problem that I see with […] is […]. The node-based approach is solid, in my eyes, but the implementation leaves everything else to be desired. If you want to have a decent node-based system, you need a solid build pipeline, which I didn't find. I think following your approach is better, provided a chunk library is encapsulated. Something like:

```js
class Material {
  constructor() {
    this.chunkLibrary = new ShaderChunkLibrary({ parent: THREE.DefaultChunkLibrary });
  }
  // ...
}
```

and then you could do:

```js
myMaterial.chunkLibrary.set('chunky-hippo', 'hippo = chunky');
myMaterial.chunkLibrary.set('begin_vertex',
  `float _ind = aIndex + uTime;
  transformed *= 1. + 0.5 * sin(${frequency}.0 * ${Math.PI} * _ind );`
);
```
I'd land the includes as is, without introducing new classes. I took extreme care to keep the amount of code change to a minimum, since it's been impossible so far to convince anyone that this may be useful, and that it's completely opt-in. All these PRs have been blocked by […]. Could you possibly voice some thoughts in the other PRs?
Coming from the context of just "material extension" (as stated in the title), please do not limit yourself to just #7522; it's only one of the proposals to achieve this. Please look at the other PRs, plural. Thank you :)
There's no reason not to build one with #14098, but my opinion is that the user should not be limited to one. They should be allowed to use whichever library they choose, and be allowed to use the language/environment (JavaScript) to achieve that.
I have spent some hours reading through some of the topics you have linked. It's a bit more than what I have time for currently, and I do not wish to comment without having a good grasp of the conversation.
I think there's a misunderstanding. I do not mean a […].
As far as "one" goes, I believe that's a misunderstanding too; I was not clear enough. What I have sketched out is a cascading structure, where a library may have a parent, and when resolving a snippet by reference, the parent is to be queried if the library itself doesn't contain the reference. Think CSS. You could have multiple parents too, if that would make sense.
I don't quite understand it still. I didn't mean a JavaScript library either, but if a dictionary is suitable for this, I meant you should be able to assemble that dictionary yourself (or just reference it from somewhere). We could continue this in #14098, since it does seem to apply and doesn't have to be as generic of a discussion.
@pailhead I think we're on the same page.
I believe we are. My confidence has been shaken by so many rejections, but this helps :) You've done the courtesy to @donmccurdy and commented on the #7522 API, where I believe we're also on the same page. Can you do me the courtesy and comment on #14245, or even #7522? When a single user shows up and asks for an arguably obscure feature, it gets implemented over 13 core […]. When many other users suggest PRs or raise issues, it is blocked by #7522. I want to draw a diagram of processes and dependencies to try to illustrate better what I see as a problem here; it seems to be the only way. In lieu of that, for now, I can try to explain it. Let's consider 10 different projects/apps/examples/effects that use features that are today not available in core nor in […]. Out of these 10 features, five can be coded on top of three.js and submitted to […]. ^ This is a good thing. All it takes is to deem the example useful ("cool, people would want to do this"), and since it's just new files being added to […]. The other five, though, cannot be coded on top of three.js and require some core changes. Adding five different PRs that touch renderers, shaders, etc. would carry a tremendous amount of overhead. Based on historical data, these five would take longer to land than the first two. A perfect example of this can be seen here. The overhead starts in #14139 and continues in #14239.
This problem […].
With the looming refactor from #7522, I fail to see how further coupling […]. I think I struggle with understanding how stuff in […]. With a static system like the one proposed specifically in #14231, #14239 becomes this:

```js
myMaterial.shaderUniforms.object_space_normal_normalMatrix = {
  value: myObject.normalMatrix,
  stage: 'fragment',
  type: 'mat3'
};

myMaterial.shaderIncludes.normal_fragment_maps = `
  normal = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;
  #ifdef FLIP_SIDED
    normal = - normal;
  #endif
  #ifdef DOUBLE_SIDED
    normal = normal * ( float( gl_FrontFacing ) * 2.0 - 1.0 );
  #endif
  normal = normalize( object_space_normal_normalMatrix * normal );
`;
```
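To make the mechanics concrete: `shaderIncludes` above is the API proposed in #14231, not current three.js. A shader builder could apply such static overrides while resolving `#include` directives; the following is a minimal sketch of that resolution step (the function and chunk contents are illustrative, not three.js internals):

```js
// Sketch: how a builder might apply the proposed (hypothetical)
// material.shaderIncludes overrides. Each "#include <name>" directive in the
// template is replaced by the material's override if present, otherwise by
// the stock chunk from a chunk dictionary (THREE.ShaderChunk in three.js).
function resolveIncludes(template, stockChunks, overrides = {}) {
  return template.replace(/#include <(\w+)>/g, (match, name) => {
    if (name in overrides) return overrides[name];
    if (name in stockChunks) return stockChunks[name];
    return match; // leave unknown includes untouched
  });
}

// Usage with stand-in chunk sources:
const stockChunks = {
  begin_vertex: 'vec3 transformed = vec3( position );',
  normal_fragment_maps: '/* stock normal mapping code */'
};
const template = '#include <begin_vertex>\n#include <normal_fragment_maps>';
const out = resolveIncludes(template, stockChunks, {
  normal_fragment_maps: '/* custom object-space normal mapping */'
});
console.log(out);
```

Because the overrides are plain data keyed by chunk name, no string surgery on the assembled shader is needed from user code.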
With a system proposed in this issue, new shader features could be "staged" or tested in […]. […] would allow me to opt in.
@mrdoob I believe that […].
@Usnul from your comments in #7522 (comment) (thank you for writing those, by the way!) I assumed your worry was mainly about the internal structure and code clarity of NodeMaterial, and that you weren't concerned about (or haven't had time to consider) its functionality or readiness for use. Is that accurate? I would like to think we can have NodeMaterial in good shape within months, not years. On that timescale, a general shader transform system would be unnecessary.
Yeah, adding that method was a mistake on my part. But sometimes you need to experiment until you find the right solution... We do what we can.
It might be worth giving some insight into what's in store for […].
@donmccurdy wrt […] @mrdoob @pailhead […]
Is there a known timeline for this? It's hard to follow.
No. Sorry.
I didn't catch this before: this is a paradox and cannot happen, since it has already been several years of development. Unless we don't count the years passed since 2015 :)
I meant to say within months from this discussion, not months from development beginning in ~2015.
Thanks, that makes sense, so anywhere between 2-11 months is what we're hoping for 🙂
The current approach of letting the user do whatever they want with the shader code from within .onBeforeCompile is very powerful, and yet, at the same time, quite messy and fragile. One of the biggest issues is the dynamic nature of the material code: if you allow arbitrary modifications to the code, you cannot assume that the material code will remain the same between builds, and you have to construct that code string every time to check. This creates waste both in terms of CPU cycles and in terms of GC overhead, as that string will likely end up as garbage.
My proposal is quite simple. I propose having more restrictive transformers which can be registered onto a material. Transformers can be entirely static for most use cases, and where that's not enough, we could offer dynamic transformers, with the main difference that the library can detect and optimize the use cases where only static transformers are being used. One fairly substantial additional benefit is the added semantic information, which the library can use for various optimizations and error checking.
My observation boils down to the fact that most .onBeforeCompile functions do just one thing: […]. Here is an example to illustrate current usage:
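The original example did not survive the page scrape. A typical chunk-replacement via `onBeforeCompile` looks something like the sketch below (`onBeforeCompile` is real three.js API; the stub shader object and the spliced GLSL line are illustrative, so the snippet runs standalone without three.js):

```js
// Typical onBeforeCompile usage: string-replace one shader chunk.
const material = {}; // stand-in for e.g. new THREE.MeshStandardMaterial()

material.onBeforeCompile = function (shader) {
  shader.vertexShader = shader.vertexShader.replace(
    '#include <begin_vertex>',
    [
      '#include <begin_vertex>',
      'transformed *= 1.0 + 0.5 * sin( uTime );' // illustrative extra code
    ].join('\n')
  );
};

// Simulate what the renderer does right before compiling the program:
const shader = { vertexShader: 'void main() {\n#include <begin_vertex>\n}' };
material.onBeforeCompile(shader);
console.log(shader.vertexShader.includes('sin( uTime )')); // true
```

Note that this callback runs arbitrary code every time the shader is built, which is exactly the cost the proposal aims to avoid.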
What I propose would be: […]
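The proposed code sample was also lost in the scrape. Based on the surrounding description, a static transformer could be plain data registered onto the material; the class names and API below are hypothetical illustrations of that idea, not three.js API:

```js
// Sketch of the proposed declarative transformers. A static transform is
// plain data: which chunk to touch and what source to splice in. Because no
// arbitrary code runs, the final shader is a pure function of the transform
// list, so it can be cached and compared without rebuilding source strings.
class ChunkReplaceTransform {
  constructor(chunkName, source) {
    this.chunkName = chunkName;
    this.source = source;
  }
  apply(shaderSource) {
    return shaderSource.replace(`#include <${this.chunkName}>`, this.source);
  }
}

class TransformList {
  constructor() { this.transforms = []; }
  add(t) { this.transforms.push(t); return this; }
  apply(source) {
    return this.transforms.reduce((src, t) => t.apply(src), source);
  }
}

// Hypothetical usage on a material:
const transforms = new TransformList()
  .add(new ChunkReplaceTransform('begin_vertex',
    'vec3 transformed = position * 2.0;'));

const result = transforms.apply('void main() {\n#include <begin_vertex>\n}');
console.log(result);
```

The library can inspect such a list (it knows which chunks are touched, and that nothing else is) in a way it never could with an opaque callback.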
Since there is no arbitrary code being executed anymore, we know that the material shader does not change as long as the list of transforms hasn't changed (in addition to the other things, like defines and lighting, which we already have).
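That caching argument can be made concrete: if transforms are static data, a program cache key can be derived from them much like three.js derives keys from defines today. The key scheme below is purely illustrative, not the actual three.js program cache code:

```js
// Illustrative program-cache key: because static transforms are data, two
// materials with the same transform list (and defines) can share one compiled
// program without ever building or diffing full shader strings.
function programCacheKey(material) {
  const defineKeys = Object.entries(material.defines || {})
    .map(([k, v]) => `${k}=${v}`)
    .sort();
  const transformKeys = (material.transforms || [])
    .map(t => `${t.chunkName}:${t.source}`);
  return defineKeys.concat(transformKeys).join('|');
}

const a = { defines: { USE_MAP: 1 }, transforms: [{ chunkName: 'begin_vertex', source: 'x' }] };
const b = { defines: { USE_MAP: 1 }, transforms: [{ chunkName: 'begin_vertex', source: 'x' }] };
const c = { defines: { USE_MAP: 1 }, transforms: [{ chunkName: 'begin_vertex', source: 'y' }] };

console.log(programCacheKey(a) === programCacheKey(b)); // true: can share a program
console.log(programCacheKey(a) === programCacheKey(c)); // false: different transform
```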
A couple of ideas on top of this proposal: […]