
Too slow for simple large list #254

Closed
sixmen opened this issue Feb 14, 2019 · 20 comments · Fixed by #479

sixmen (Contributor) commented Feb 14, 2019

Is your feature request related to a problem? Please describe.
type-graphql is slower than plain graphql for a simple large list.

type-graphql: time = 920

import 'reflect-metadata';
import { ObjectType, Field, Resolver, Query, buildSchemaSync } from 'type-graphql';
import { graphql } from 'graphql';

@ObjectType()
class Item {
  @Field()
  a: number;
  @Field()
  b: number;
  @Field()
  c: number;
  @Field()
  d: number;
  @Field()
  e: number;
}

@Resolver(Item)
class ItemResolver {
  @Query(() => [Item])
  items() {
    return new Array(5000).fill(0).map((v, i) => ({ a: i, b: i, c: i, d: i, e: i }))
  }
}

const schema = buildSchemaSync({
  resolvers: [ItemResolver]
});

const start = Date.now();
graphql(schema, '{ items { a b c d e } }')
  .then((r) => {
    console.log(r.data.items.length);
    console.log(r.data.items[30].c);
    console.log('time =', Date.now() - start);
  });

normal graphql: time = 101

import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList
} from 'graphql';

const Item = new GraphQLObjectType({
  name: 'Item',
  fields: {
    a: { type: GraphQLString },
    b: { type: GraphQLString },
    c: { type: GraphQLString },
    d: { type: GraphQLString },
    e: { type: GraphQLString },
  }
});

var schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'RootQueryType',
    fields: {
      items: {
        type: GraphQLList(Item),
        resolve() {
          return new Array(5000).fill(0).map((v, i) => ({ a: i, b: i, c: i, d: i, e: i }))
        }
      }
    }
  })
});

const start = Date.now();
graphql(schema, '{ items { a b c d e } }')
  .then((r) => {
    console.log(r.data.items.length);
    console.log(r.data.items[30].c);
    console.log('time =', Date.now() - start);
  });

Describe the solution you'd like
I think some optimizations are needed for simple cases.

If I change createSimpleFieldResolver like this, it runs faster:

function createSimpleFieldResolver(fieldMetadata) {
    return (root, args, context, info) => root[fieldMetadata.name];
}
MichalLytek (Owner) commented:

Note: partially related to #71.

@sixmen
It's a well-known fact that abstraction makes things slower.
But your comparison only shows a super-simple hello-world case where you just read properties of generated objects.

If you are interested in making TypeGraphQL faster, you should consider creating more real-life benchmarks, where you call an HTTP endpoint, read from the database, write to the database, have input validation, some authorization guard mechanism, middlewares for catching and transforming results, etc. In this more complicated scenario the difference will be much smaller, because most of the time is spent executing arbitrary user code, not the framework internals.

I can add a @Field({ simple: true }) decorator option to skip all the middlewares and other framework internals and generate simple property-access resolvers, if that fixes your real app problem. But I would rather invite you to create some benchmark suites and measure the time spent in different TypeGraphQL internals, and maybe we will find some easy optimizations, like skipping the transformation of object instances or the validation of empty args objects, etc.
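
For illustration only, a minimal sketch of how such an option could look on the Item type from the report above (the simple flag is just the proposal from this comment, not an existing TypeGraphQL API):

@ObjectType()
class Item {
  // Hypothetical opt-out: skip middlewares/auth and resolve this field
  // as a plain property access on the parent object.
  @Field({ simple: true })
  a: number;
}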

MichalLytek added the Enhancement 🆕, Community 👨‍👧 and Discussion 💬 labels on Feb 14, 2019
sixmen (Contributor, Author) commented Feb 15, 2019

Actually, this is a REAL problem for me.

My program reads 500 rows from the database and returns them as a list, and no middlewares are applied to the fields.

If the DB time is 200ms, type-graphql is still very slow (300ms vs 100ms).

This is a graph from when I removed the type-graphql runtime (the decorators remain, and the schema is built from the decorators at compile time):

[screenshot: response-time graph]

sixmen (Contributor, Author) commented Feb 15, 2019

For my problem, the following shortcuts are enough, because I don't have global middlewares.

export function createSimpleFieldResolver(
  fieldMetadata: FieldMetadata,
): GraphQLFieldResolver<any, any, any> {
  const { authChecker, authMode, globalMiddlewares } = BuildContext;
  const middlewares = globalMiddlewares.concat(fieldMetadata.middlewares!);
  applyAuthChecker(middlewares, authMode, authChecker, fieldMetadata.roles);

  return async (root, args, context, info) => {
    const resolverData: ResolverData<any> = { root, args, context, info };
    if (middlewares.length === 0) {
      return root[fieldMetadata.name];
    }
    return await applyMiddlewares(resolverData, middlewares, () => root[fieldMetadata.name]);
  };
}

OR

export function createSimpleFieldResolver(
  fieldMetadata: FieldMetadata,
): GraphQLFieldResolver<any, any, any> {
  const { authChecker, authMode, globalMiddlewares } = BuildContext;
  const middlewares = globalMiddlewares.concat(fieldMetadata.middlewares!);
  applyAuthChecker(middlewares, authMode, authChecker, fieldMetadata.roles);

  if (middlewares.length === 0) {
    return (root, args, context, info) => {
      return root[fieldMetadata.name];
    };
  } else {
    return async (root, args, context, info) => {
      const resolverData: ResolverData<any> = { root, args, context, info };
      return await applyMiddlewares(resolverData, middlewares, () => root[fieldMetadata.name]);
    };
  }
}

If someone has global middlewares, some method to skip the middlewares (like { simple: true }) may be needed.

MichalLytek (Owner) commented Mar 3, 2019

When #183 arrives, I can definitely detect whether the resolver has some middlewares or other things attached, and then decide whether to use an advanced resolver or just the simple one (or even no resolver at all, letting graphql-js handle it by itself without transforming the parent object and other overheads).

I will take care of this problem after some benchmarks (#71).
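
For illustration, a hedged sketch (building on the helper snippets above, not the library's actual internals) of what that detection could look like: when nothing is attached to a field, return no resolver at all, so graphql-js falls back to its default property-access resolver.

export function createFieldResolver(
  fieldMetadata: FieldMetadata,
): GraphQLFieldResolver<any, any, any> | undefined {
  const { authChecker, authMode, globalMiddlewares } = BuildContext;
  const middlewares = globalMiddlewares.concat(fieldMetadata.middlewares!);
  applyAuthChecker(middlewares, authMode, authChecker, fieldMetadata.roles);

  // Nothing attached to this field: emit no resolver and let graphql-js
  // use its built-in default resolver (plain property access on the parent).
  if (middlewares.length === 0) {
    return undefined;
  }

  // Otherwise keep the full middleware-aware resolver.
  return (root, args, context, info) => {
    const resolverData: ResolverData<any> = { root, args, context, info };
    return applyMiddlewares(resolverData, middlewares, () => root[fieldMetadata.name]);
  };
}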

MichalLytek added this to the 1.0.0 release milestone on Mar 3, 2019
preetb123 commented:

What is the resolution for this issue? I have this exact same problem. @MichalLytek

trubi commented Nov 6, 2019

Note: it's not only slow, but it also consumes a ridiculous amount of memory. In our case, we had to increase memory from 192MB to 2GB to prevent crashing.

MichalLytek (Owner) commented Nov 11, 2019

> I would invert control at a higher level. Like @Crhaj mentioned, there can be situations where I need a big load of data and want to have almost zero overhead from type-graphql.
>
> It can be marked at the Query/Mutation level, maybe via context injection?
>
> ctx.skipMiddlewares();

@theodorDiaconu
So you want dynamic control at runtime, rather than static declarations in decorator options?

What would be the use case where you want to strip off authorization or other middlewares?
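
For context, a hedged illustration of how the runtime idea quoted above might be used from a resolver (ctx.skipMiddlewares(), MyContext and loadLargeItemList are hypothetical; only the @Ctx() decorator is an existing TypeGraphQL API):

import { Ctx, Query, Resolver } from 'type-graphql';

@Resolver(Item)
class ItemResolver {
  @Query(() => [Item])
  async items(@Ctx() ctx: MyContext) {
    // Hypothetical runtime opt-out: skip the field middleware stack for this query only.
    ctx.skipMiddlewares();
    return loadLargeItemList(); // placeholder for the expensive DB read
  }
}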

theodorDiaconu commented:

@MichalLytek I have tested the implementation from this issue's description. We're having big problems. I wanted to test further and play with the example.

The plain graphql-js schema returns 50,000 generated docs in 508ms. The type-graphql one returns 2,000 docs in 752ms. Something is really, really off. With 50,000 docs and type-graphql, node (under nodemon) crashes:

Security context: 0xec5e8325891 <JSObject>
    1: /* anonymous */ [/Users/theodor/Projects/tests/kaviar/node_modules/tslib/tslib.js:~106] [pc=0x31244a2fd73e](this=0xec5dd184f01 <JSGlobal Object>,resolve=0xec508799a89 <JSFunction (sfi = 0xec53262cd79)>,reject=0xec508799ad1 <JSFunction (sfi = 0xec53262ced9)>)
    2: new Promise(aka Promise)(this=0xec532602311 <the_hole>,0xec5087999a1 <JSFunction (sfi = 0xec598e6eb69)>)
    4: dispatchHandle...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/l

It seems that the amount of time grows exponentially, not linearly, which leads me to think there's something else going on here.

theodorDiaconu commented:

I did some research and it seems it's related to createSimpleFieldResolver and the __awaiter helper from TypeScript's tslib:

    // return (root, args, context, info) => root[fieldMetadata.name];
    return (root, args, context, info) => tslib_1.__awaiter(this, void 0, void 0, function* () {
        const resolverData = { root, args, context, info };
        return yield root[fieldMetadata.name];
        // return yield helpers_1.applyMiddlewares(container, resolverData, middlewares, () => root[fieldMetadata.name]);
    });

As it turns out, the tslib __awaiter helper was heavy. Just changing the compilation target to es2017, so that native async/await is used, gave us a 2x improvement: the 5000-item example now takes 500ms instead of 1000ms.
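
For anyone wanting to reproduce this, the change boils down to raising the compilation target in tsconfig.json so TypeScript emits native async/await instead of downleveling it through the tslib __awaiter state machine (a minimal sketch; your other compiler options stay as they are):

{
  "compilerOptions": {
    // es2017 or higher keeps async/await native instead of emitting __awaiter calls
    "target": "es2017"
  }
}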

MichalLytek (Owner) commented:

@theodorDiaconu
Can I ask you to tackle #71? You could set up a few scenarios: a simple server, returning deeply nested data with arrays, returning big JSON data (like in this issue), and maybe also some scenarios using features like authorization, middlewares, custom decorators, and dependency injection, versus the manual way with apollo-server.

This would allow us to work on the performance problem together, rather than "fixing" code without knowing its real influence 😉

Lurk commented Nov 25, 2019

I can confirm that compiling with a higher target makes things ~2x faster.

Query with a lot of data (Node 12):
- target es2016: execution time ~8 sec
- target es2018: execution time ~3 sec

It also fixes #449:
- target es2016: after a week the node process consumes ~500 MB of RAM
- target es2018: after a week the node process consumes ~65 MB of RAM

MichalLytek (Owner) commented:

As Node 8 is going out of support, we can bump the minimum to 10.3 and use ES2018 features 💪
I will try to make this change part of the 0.18 release, as it's a breaking change 😉

MichalLytek (Owner) commented:

I've created a simple benchmark which showed that this speeds up the execution for a simple returned object from 4831 ms to 2196 ms, and for a deeply nested object from 17067 ms to 7311 ms! 💪

But it's still 2196 ms vs 871 ms for graphql-js, and likewise 7311 ms vs 2206 ms 😞

I will check the behavior for big arrays of objects and then add an option to disable the TypeGraphQL resolver stack in favor of direct object field access 😉

MichalLytek (Owner) commented:

Second discovery - Node 13 is really fast! 🚀
Without any changes in the code (only the ES2018 build), it drops execution time from 2.196s to 1.622s and from 7.311s to just 4.557s!

Also, using default resolvers for object fields increases the speed for nested objects from 4.557s to 3.086s, which is really close to the 2.422s of graphql-js 💪

Now, after confirmation, I can work on an API to enable that optimization 😉

MichalLytek (Owner) commented:

@Lurk @theodorDiaconu @trubi @preetb123 @sixmen
Guys, please check the newest v0.18.0-beta.7 🎉
Node.js 13 would be best, I think; maybe 12 is also fast with async/await 🤔
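
A hedged usage sketch of the per-type opt-in mentioned above; the option name simpleResolvers is an assumption here, so check the 0.18 release notes for the exact API:

// Assumed option name: tells TypeGraphQL to skip its resolver/middleware stack
// for this type's fields and use plain property access instead.
@ObjectType({ simpleResolvers: true })
class Item {
  @Field()
  a: number;
}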

holzerch commented Dec 6, 2019

@MichalLytek thanks for your work here on improving the performance. In our project we ran into massive performance problems with your library: only 3 requests per second max out a single node instance at 220% CPU usage (2017 MacBook Pro). The related query consists of a very complex and big tree of objects with a total size of about 320 kB. The new beta version already improves the situation a lot. However, it is very cumbersome to add this annotation to all object types. Since we currently don't have any middleware at all, we would much appreciate a global switch for the behavior. Or, even better, the framework could detect that there is no middleware and skip the overhead.

MichalLytek (Owner) commented:

I agree but it's not so simple to do this detection right now, so please use the workaround for now.

theodorDiaconu commented:

Good job @MichalLytek on the performance improvements. I stopped iterating on it. I have decided, for now, to stay away from any abstraction layer on top of the official GraphQL schema. This might change in the future, as your library is the best out there, it's properly maintained, and it's clear that you care about it.

Another concern, which I also expressed on Gitter, is that you are the sole developer. A good developer nonetheless, but I cannot base a medical enterprise application on a framework maintained by a SINGLE developer; it's way too risky. GraphQL is moving fast, and most likely this isn't your full-time job.

Try to think of tooling that you can offer to assist developing your framework and sell it, or write a book, or try to find some case studies from successful projects using this and get some insight. You could also assemble a team and leverage that to offer paid enterprise support and training. Then people like me, who are scared as **** of code-base abandonment, will no longer think this way.

sixmen (Contributor, Author) commented Dec 6, 2019

I checked the beta, and the issue I created is resolved.
(I used a forked version for a while - https://github.com/croquiscom/type-graphql/commits/croquis-190523 - and I'll switch to your approach.)

But I found another problem after I created this issue.
It is not as critical for me as this one, but it is still a problem.

So I opened another issue, #487; please check it.

MichalLytek (Owner) commented Dec 22, 2019

@theodorDiaconu

> I cannot base a medical enterprise application on a framework maintained by a SINGLE developer; it's way too risky.

I understand your point of view, as for enterprise-grade products you need solid, stable, and well-supported tools. However, be aware that the "official" graphql-js is also maintained by Ivan Goncharov alone 😉

> Try to think of tooling that you can offer to assist developing your framework and sell it, or write a book, or try to find some case studies from successful projects using this and get some insight. You could also assemble a team and leverage that to offer paid enterprise support and training.

I have some sponsorship from companies and backers on OpenCollective, so I hope things will move toward less hobbyist, more stable, part-time maintenance 😉

I don't want to go down the paid-plan route, especially not now, when TypeGraphQL has just started being adopted by the GraphQL community. It also might be too early for workshops and training, but I am always open to requests. Maybe a paid course on a learning platform would be a good fit 🤔

Also, in the Gold Sponsors tier you "get premium technical support from our core contributors", so if you only need support with solving problems, fixing bugs, or adding new features in TypeGraphQL for your enterprise app, and you are able to pay for it, that's not a problem - I run my own business and can send an invoice for that 😉
