From fca1c5d9ecd5e682609015f7d7cbfe459ac4810c Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 17:21:00 -0400 Subject: [PATCH 01/65] Introduce @defer and @stream. Update Section 3 -- Type System.md Update Section 3 -- Type System.md Update Section 3 -- Type System.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Update Section 6 -- Execution.md Amend changes change initial_count to initialCount add payload fields to Response section add stream validation for overlapping fields spelling updates add note about re-execution add note about final payloads label is optional fix build Update ExecuteQuery with hasNext logic fix spelling fix spaces Update execution to add defer/stream to mutations and subscriptions clarify stream records Apply suggestions from code review Co-authored-by: Benjie Gillam missing bracket Update spec/Section 7 -- Response.md Co-authored-by: Benjie Gillam clarify line about stream record iterator update visitedFragments with defer Updates to consolidate subsequent payload logic for queries, mutations, and subscriptions Apply suggestions from code review Co-authored-by: Benjie Gillam address review feedback Add handling of termination signal more formatting fix spelling Add assertion for record type add "Stream Directives Are Used On List Fields" validation rule Add defaultValue to @stream initialCount Update spec/Section 5 -- Validation.md Co-authored-by: Benjie Gillam # Conflicts: # spec/Section 3 -- Type System.md # spec/Section 5 -- Validation.md # spec/Section 6 -- Execution.md # spec/Section 7 -- Response.md --- cspell.yml | 2 + spec/Section 3 -- Type System.md | 63 +++++- spec/Section 5 -- Validation.md | 37 ++++ spec/Section 6 -- Execution.md | 334 +++++++++++++++++++++++++++---- spec/Section 7 -- Response.md 
| 67 +++++-- 5 files changed, 451 insertions(+), 52 deletions(-) diff --git a/cspell.yml b/cspell.yml index e8aa73355..66902e41a 100644 --- a/cspell.yml +++ b/cspell.yml @@ -4,12 +4,14 @@ ignoreRegExpList: - /[a-z]{2,}'s/ words: # Terms of art + - deprioritization - endianness - interoperation - monospace - openwebfoundation - parallelization - structs + - sublist - subselection # Fictional characters / examples - alderaan diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 7c116bf81..dec2cff30 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -758,8 +758,9 @@ And will yield the subset of each object type queried: When querying an Object, the resulting mapping of fields are conceptually ordered in the same order in which they were encountered during execution, excluding fragments for which the type does not apply and fields or fragments -that are skipped via `@skip` or `@include` directives. This ordering is -correctly produced when using the {CollectFields()} algorithm. +that are skipped via `@skip` or `@include` directives or temporarily skipped via +`@defer`. This ordering is correctly produced when using the {CollectFields()} +algorithm. Response serialization formats capable of representing ordered maps should maintain this ordering. Serialization formats which can only represent unordered @@ -1901,6 +1902,11 @@ by a validator, executor, or client tool such as a code generator. GraphQL implementations should provide the `@skip` and `@include` directives. +GraphQL implementations are not required to implement the `@defer` and `@stream` +directives. If they are implemented, they must be implemented according to this +specification. GraphQL implementations that do not support these directives must +not make them available via introspection. 
+
 GraphQL implementations that support the type system definition language must
 provide the `@deprecated` directive if representing deprecated portions of the
 schema.
@@ -2116,3 +2122,56 @@ to the relevant IETF specification.
 ```graphql example
 scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122")
 ```
+
+### @defer
+
+```graphql
+directive @defer(
+  label: String
+  if: Boolean
+) on FRAGMENT_SPREAD | INLINE_FRAGMENT
+```
+
+The `@defer` directive may be provided for fragment spreads and inline fragments
+to inform the executor to delay execution of the current fragment, indicating
+that the fragment is deprioritized. A query with the `@defer` directive will
+potentially cause the request to return multiple responses, where non-deferred
+data is delivered in the initial response and deferred data is delivered in a
+subsequent response. `@include` and `@skip` take precedence over `@defer`.
+
+```graphql example
+query myQuery($shouldDefer: Boolean) {
+  user {
+    name
+    ...someFragment @defer(label: "someLabel", if: $shouldDefer)
+  }
+}
+fragment someFragment on User {
+  id
+  profile_picture {
+    uri
+  }
+}
+```
+
+### @stream
+
+```graphql
+directive @stream(label: String, initialCount: Int = 0, if: Boolean) on FIELD
+```
+
+The `@stream` directive may be provided for a field of `List` type so that the
+backend can leverage technology such as asynchronous iterators to provide a
+partial list in the initial response, and additional list items in subsequent
+responses. `@include` and `@skip` take precedence over `@stream`.
+
+```graphql example
+query myQuery($shouldStream: Boolean) {
+  user {
+    friends(first: 10) {
+      nodes @stream(label: "friendsStream", initialCount: 5, if: $shouldStream)
+    }
+  }
+}
+```
diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md
index dceec126b..0c1a06c80 100644
--- a/spec/Section 5 -- Validation.md
+++ b/spec/Section 5 -- Validation.md
@@ -422,6 +422,7 @@ FieldsInSetCanMerge(set):
   {set} including visiting fragments and inline fragments.
 - Given each pair of members {fieldA} and {fieldB} in {fieldsForName}:
   - {SameResponseShape(fieldA, fieldB)} must be true.
+  - {SameStreamDirective(fieldA, fieldB)} must be true.
   - If the parent types of {fieldA} and {fieldB} are equal or if either is not
     an Object Type:
     - {fieldA} and {fieldB} must have identical field names.
@@ -455,6 +456,16 @@ SameResponseShape(fieldA, fieldB):
   - If {SameResponseShape(subfieldA, subfieldB)} is false, return false.
 - Return true.
 
+SameStreamDirective(fieldA, fieldB):
+
+- If neither {fieldA} nor {fieldB} has a directive named `stream`:
+  - Return true.
+- If both {fieldA} and {fieldB} have a directive named `stream`:
+  - Let {streamA} be the directive named `stream` on {fieldA}.
+  - Let {streamB} be the directive named `stream` on {fieldB}.
+  - If {streamA} and {streamB} have identical sets of arguments, return true.
+- Return false.
+
 **Explanatory Text**
 
 If multiple field selections with the same response names are encountered during
@@ -1517,6 +1528,32 @@ query ($foo: Boolean = true, $bar: Boolean = false) {
 }
 ```
 
+### Stream Directives Are Used On List Fields
+
+**Formal Specification**
+
+- For every {directive} in the document:
+  - Let {directiveName} be the name of {directive}.
+  - If {directiveName} is "stream":
+    - Let {adjacent} be the AST node the directive affects.
+    - {adjacent} must be a List type.
+
+**Explanatory Text**
+
+GraphQL directive locations do not provide enough granularity to distinguish the
+type of fields used in a GraphQL document.
Since the stream directive is only
+valid on list fields, an additional validation rule must be used to ensure it is
+used correctly.
+
+For example, the following document will only pass validation if `field` is
+defined as a List type in the associated schema.
+
+```graphql counter-example
+query {
+  field @stream(initialCount: 0)
+}
+```
+
 ## Variables
 
 ### Variable Uniqueness
diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 28862ea89..0bd2ed084 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -30,13 +30,21 @@ request is determined by the result of executing this operation according to the
 "Executing Operations” section below.
 
 ExecuteRequest(schema, document, operationName, variableValues, initialValue):
 
+Note: The execution model assumes the implementing language supports coroutines.
+Alternatively, the socket can provide a write buffer pointer to allow
+{ExecuteRequest()} to directly write payloads into the buffer.
+
 - Let {operation} be the result of {GetOperation(document, operationName)}.
 - Let {coercedVariableValues} be the result of {CoerceVariableValues(schema,
   operation, variableValues)}.
 - If {operation} is a query operation:
-  - Return {ExecuteQuery(operation, schema, coercedVariableValues,
-    initialValue)}.
+  - Let {executionResult} be the result of calling {ExecuteQuery(operation,
+    schema, coercedVariableValues, initialValue, subsequentPayloads)}.
+  - If {executionResult} is an iterator:
+    - For each {payload} in {executionResult}:
+      - Yield {payload}.
+  - Otherwise:
+    - Return {executionResult}.
 - Otherwise if {operation} is a mutation operation:
   - Return {ExecuteMutation(operation, schema, coercedVariableValues,
     initialValue)}.
@@ -128,15 +136,28 @@ An initial value may be provided when executing a query operation.
 
 ExecuteQuery(query, schema, variableValues, initialValue):
 
+- Let {subsequentPayloads} be an empty list.
 - Let {queryType} be the root Query type in {schema}.
- Assert: {queryType} is an Object type. - Let {selectionSet} be the top level Selection Set in {query}. - Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - queryType, initialValue, variableValues)} _normally_ (allowing - parallelization). + queryType, initialValue, variableValues, subsequentPayloads)} _normally_ + (allowing parallelization). - Let {errors} be the list of all _field error_ raised while executing the selection set. -- Return an unordered map containing {data} and {errors}. +- If {subsequentPayloads} is empty: + - Return an unordered map containing {data} and {errors}. +- If {subsequentPayloads} is not empty: + - Yield an unordered map containing {data}, {errors}, and an entry named + {hasNext} with the value {true}. + - Let {iterator} be the result of running + {YieldSubsequentPayloads(subsequentPayloads)}. + - For each {payload} yielded by {iterator}: + - If a termination signal is received: + - Send a termination signal to {iterator}. + - Return. + - Otherwise: + - Yield {payload}. ### Mutation @@ -150,14 +171,27 @@ mutations ensures against race conditions during these side-effects. ExecuteMutation(mutation, schema, variableValues, initialValue): +- Let {subsequentPayloads} be an empty list. - Let {mutationType} be the root Mutation type in {schema}. - Assert: {mutationType} is an Object type. - Let {selectionSet} be the top level Selection Set in {mutation}. - Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - mutationType, initialValue, variableValues)} _serially_. + mutationType, initialValue, variableValues, subsequentPayloads)} _serially_. - Let {errors} be the list of all _field error_ raised while executing the selection set. -- Return an unordered map containing {data} and {errors}. +- If {subsequentPayloads} is empty: + - Return an unordered map containing {data} and {errors}. 
+- If {subsequentPayloads} is not empty: + - Yield an unordered map containing {data}, {errors}, and an entry named + {hasNext} with the value {true}. + - Let {iterator} be the result of running + {YieldSubsequentPayloads(subsequentPayloads)}. + - For each {payload} yielded by {iterator}: + - If a termination signal is received: + - Send a termination signal to {iterator}. + - Return. + - Otherwise: + - Yield {payload}. ### Subscription @@ -291,22 +325,36 @@ MapSourceToResponseEvent(sourceStream, subscription, schema, variableValues): - Return a new event stream {responseStream} which yields events as follows: - For each {event} on {sourceStream}: - - Let {response} be the result of running + - Let {executionResult} be the result of running {ExecuteSubscriptionEvent(subscription, schema, variableValues, event)}. - - Yield an event containing {response}. + - For each {response} yielded by {executionResult}: + - Yield an event containing {response}. - When {responseStream} completes: complete this event stream. ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): +- Let {subsequentPayloads} be an empty list. - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level Selection Set in {subscription}. - Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - subscriptionType, initialValue, variableValues)} _normally_ (allowing - parallelization). + subscriptionType, initialValue, variableValues, subsequentPayloads)} + _normally_ (allowing parallelization). - Let {errors} be the list of all _field error_ raised while executing the selection set. -- Return an unordered map containing {data} and {errors}. +- If {subsequentPayloads} is empty: + - Return an unordered map containing {data} and {errors}. 
+- If {subsequentPayloads} is not empty:
+  - Yield an unordered map containing {data}, {errors}, and an entry named
+    {hasNext} with the value {true}.
+  - Let {iterator} be the result of running
+    {YieldSubsequentPayloads(subsequentPayloads)}.
+  - For each {payload} yielded by {iterator}:
+    - If a termination signal is received:
+      - Send a termination signal to {iterator}.
+      - Return.
+    - Otherwise:
+      - Yield {payload}.
 
 Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to
 {ExecuteQuery()} since this is how each event result is produced.
@@ -322,6 +370,44 @@ Unsubscribe(responseStream):
 
 - Cancel {responseStream}
 
+## Yield Subsequent Payloads
+
+If an operation contains subsequent payload records resulting from `@stream` or
+`@defer` directives, the {YieldSubsequentPayloads} algorithm defines how the
+payloads should be processed.
+
+YieldSubsequentPayloads(subsequentPayloads):
+
+- While {subsequentPayloads} is not empty:
+  - If a termination signal is received:
+    - For each {record} in {subsequentPayloads}:
+      - If {record} is a Stream Record:
+        - Let {iterator} be the corresponding field on the Stream Record
+          structure.
+        - Send a termination signal to {iterator}.
+    - Return.
+  - Let {record} be the first complete item in {subsequentPayloads}.
+  - Remove {record} from {subsequentPayloads}.
+  - Assert: {record} must be a Deferred Fragment Record or a Stream Record.
+  - If {record} is a Deferred Fragment Record:
+    - Let {payload} be the result of running
+      {ResolveDeferredFragmentRecord(record, variableValues,
+      subsequentPayloads)}.
+  - If {record} is a Stream Record:
+    - Let {payload} be the result of running {ResolveStreamRecord(record,
+      variableValues, subsequentPayloads)}.
+  - If {payload} is {null}:
+    - If {subsequentPayloads} is empty:
+      - Yield a map containing a field `hasNext` with the value {false}.
+      - Return.
+    - If {subsequentPayloads} is not empty:
+      - Continue to the next record in {subsequentPayloads}.
+  - If {record} is not the final element in {subsequentPayloads}:
+    - Add an entry to {payload} named `hasNext` with the value {true}.
+  - If {record} is the final element in {subsequentPayloads}:
+    - Add an entry to {payload} named `hasNext` with the value {false}.
+  - Yield {payload}.
 
 ## Executing Selection Sets
 
 To execute a selection set, the object value being evaluated and the object type
@@ -332,10 +418,13 @@ First, the selection set is turned into a grouped field set; then, each
 represented field in the grouped field set produces an entry into a response
 map.
 
-ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues):
+ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues,
+subsequentPayloads, parentPath):
 
-- Let {groupedFieldSet} be the result of {CollectFields(objectType,
-  selectionSet, variableValues)}.
+- If {subsequentPayloads} is not provided, initialize it to the empty set.
+- If {parentPath} is not provided, initialize it to an empty list.
+- Let {groupedFieldSet} be the result of {CollectFields(objectType, objectValue,
+  selectionSet, variableValues, subsequentPayloads, parentPath)}.
 - Initialize {resultMap} to an empty ordered map.
 - For each {groupedFieldSet} as {responseKey} and {fields}:
   - Let {fieldName} be the name of the first entry in {fields}. Note: This value
@@ -344,7 +433,7 @@ ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues):
     {objectType}.
   - If {fieldType} is defined:
     - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
-      fields, variableValues)}.
+      fields, variableValues, subsequentPayloads, parentPath)}.
     - Set {responseValue} as the value for {responseKey} in {resultMap}.
 - Return {resultMap}.
 
@@ -465,7 +554,13 @@ Before execution, the selection set is converted to a grouped field set by
 calling {CollectFields()}. Each entry in the grouped field set is a list of
 fields that share a response key (the alias if defined, otherwise the field
 name).
 This ensures all fields with the same response key (including those in
-referenced fragments) are executed at the same time.
+referenced fragments) are executed at the same time. A deferred selection set's
+fields will not be included in the grouped field set. Rather, a record
+representing the deferred fragment and additional context will be stored in a
+list. The executor revisits and resumes execution for the list of deferred
+fragment records after the initial execution is initiated. This deferred
+execution would "re-execute" fields with the same response key that were present
+in the grouped field set.
 
 As an example, collecting the fields of this selection set would collect two
 instances of the field `a` and one of field `b`:
@@ -490,7 +585,8 @@ The depth-first-search order of the field groups produced by {CollectFields()}
 is maintained through execution, ensuring that fields appear in the executed
 response in a stable and predictable order.
 
-CollectFields(objectType, selectionSet, variableValues, visitedFragments):
+CollectFields(objectType, objectValue, selectionSet, variableValues,
+deferredFragments, parentPath, visitedFragments):
 
 - If {visitedFragments} is not provided, initialize it to the empty set.
 - Initialize {groupedFields} to an empty ordered map of lists.
@@ -513,8 +609,12 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments):
       - Append {selection} to the {groupForResponseKey}.
   - If {selection} is a {FragmentSpread}:
     - Let {fragmentSpreadName} be the name of {selection}.
-    - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
-      {selection} in {selectionSet}.
+    - If {fragmentSpreadName} provides the directive `@defer` and its {if}
+      argument is {true} or is a variable in {variableValues} with the value
+      {true}:
+      - Let {deferDirective} be that directive.
+    - If {fragmentSpreadName} is in {visitedFragments} and {deferDirective} is
+      not defined, continue with the next {selection} in {selectionSet}.
- Add {fragmentSpreadName} to {visitedFragments}. - Let {fragment} be the Fragment in the current Document whose name is {fragmentSpreadName}. @@ -524,6 +624,12 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. + - If {deferDirective} is defined: + - Let {deferredFragment} be the result of calling + {DeferFragment(objectType, objectValue, fragmentSelectionSet, + parentPath)}. + - Append {deferredFragment} to {deferredFragments}. + - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, fragmentSelectionSet, variableValues, visitedFragments)}. @@ -539,9 +645,18 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): fragmentType)} is false, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {selection}. + - If {InlineFragment} provides the directive `@defer`, let {deferDirective} + be that directive. + - If {deferDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}: + - Let {deferredFragment} be the result of calling + {DeferFragment(objectType, objectValue, fragmentSelectionSet, + parentPath)}. + - Append {deferredFragment} to {deferredFragments}. + - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. + visitedFragments, parentPath)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. 
@@ -550,6 +665,9 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments):
   - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey};
     if no such list exists, create it as an empty list.
   - Append all items in {fragmentGroup} to {groupForResponseKey}.
 - Return {groupedFields}.
 
+Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
+directives may be applied in either order since they apply commutatively.
+
 DoesFragmentTypeApply(objectType, fragmentType):
 
 - If {fragmentType} is an Object Type:
@@ -562,8 +680,47 @@ DoesFragmentTypeApply(objectType, fragmentType):
   - if {objectType} is a possible type of {fragmentType}, return {true}
     otherwise return {false}.
 
-Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
-directives may be applied in either order since they apply commutatively.
+DeferFragment(objectType, objectValue, fragmentSelectionSet, parentPath):
+
+- Let {label} be the value of, or the variable provided to, {deferDirective}'s
+  {label} argument.
+- Let {deferredFragmentRecord} be the result of calling
+  {CreateDeferredFragmentRecord(label, objectType, objectValue,
+  fragmentSelectionSet, parentPath)}.
+- Return {deferredFragmentRecord}.
+
+#### Deferred Fragment Record
+
+Let {deferredFragmentRecord} be an inline fragment or fragment spread with
+`@defer` provided.
+
+A Deferred Fragment Record is a structure containing:
+
+- {label}: value derived from the `@defer` directive.
+- {objectType}: of the {deferredFragmentRecord}.
+- {objectValue}: of the {deferredFragmentRecord}.
+- {fragmentSelectionSet}: the top level selection set of
+  {deferredFragmentRecord}.
+- {path}: a list of field names and indices from root to
+  {deferredFragmentRecord}.
+
+CreateDeferredFragmentRecord(label, objectType, objectValue,
+fragmentSelectionSet, path):
+
+- If {path} is not provided, initialize it to an empty list.
+- Construct a deferred fragment record based on the parameters passed in.
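To make the record structure above concrete, here is a minimal sketch in Python; the class and function names are hypothetical and only mirror the spec text, not any particular implementation:

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

# Hypothetical sketch of the Deferred Fragment Record structure described
# above; the fields mirror the spec text one-to-one.
@dataclass
class DeferredFragmentRecord:
    label: Optional[str]          # value derived from the @defer directive
    object_type: Any              # objectType of the deferred fragment
    object_value: Any             # objectValue of the deferred fragment
    fragment_selection_set: Any   # top-level selection set of the fragment
    path: List[Any] = field(default_factory=list)  # field names / indices from root

def create_deferred_fragment_record(label, object_type, object_value,
                                    fragment_selection_set, path=None):
    # "If {path} is not provided, initialize it to an empty list."
    return DeferredFragmentRecord(label, object_type, object_value,
                                  fragment_selection_set, list(path or []))
```

The record carries enough context ({objectType}, {objectValue}, the selection set, and the path) for the executor to resume execution of the fragment later without re-traversing the operation.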
+
+ResolveDeferredFragmentRecord(deferredFragmentRecord, variableValues,
+subsequentPayloads):
+
+- Let {label}, {objectType}, {objectValue}, {fragmentSelectionSet}, {path} be
+  the corresponding fields in the deferred fragment record structure.
+- Let {payload} be the result of calling
+  {ExecuteSelectionSet(fragmentSelectionSet, objectType, objectValue,
+  variableValues, subsequentPayloads, path)}.
+- Add an entry to {payload} named `label` with the value {label}.
+- Add an entry to {payload} named `path` with the value {path}.
+- Return {payload}.
 
 ## Executing Fields
 
@@ -573,16 +730,29 @@ coerces any provided argument values, then resolves a value for the field, and
 finally completes that value either by recursively executing another selection
 set or coercing a scalar value.
 
-ExecuteField(objectType, objectValue, fieldType, fields, variableValues):
+ExecuteField(objectType, objectValue, fieldType, fields, variableValues,
+subsequentPayloads, parentPath):
 
 - Let {field} be the first entry in {fields}.
 - Let {fieldName} be the field name of {field}.
 - Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field,
   variableValues)}
+- If {field} provides the directive `@stream`, let {streamDirective} be that
+  directive.
+  - Let {initialCount} be the value or variable provided to {streamDirective}'s
+    {initialCount} argument.
+  - Let {resolvedValue} be {ResolveFieldGenerator(objectType, objectValue,
+    fieldName, argumentValues, initialCount)}.
+  - Let {result} be the result of calling {CompleteValue(fieldType, fields,
+    resolvedValue, variableValues, subsequentPayloads, parentPath)}.
+  - Append {fieldName} to the {path} field of every record in
+    {subsequentPayloads}.
+  - Return {result}.
 - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
   argumentValues)}.
-- Return the result of {CompleteValue(fieldType, fields, resolvedValue,
-  variableValues)}.
+- Let {result} be the result of calling {CompleteValue(fieldType, fields,
+  resolvedValue, variableValues, subsequentPayloads)}.
+- Append {fieldName} to the {path} of every record in {subsequentPayloads}.
+- Return {result}.
 
 ### Coercing Field Arguments
 
@@ -645,11 +815,20 @@ must only allow usage of variables of appropriate types.
 
 While nearly all of GraphQL execution can be described generically, ultimately
 the internal system exposing the GraphQL interface must provide values. This is
 exposed via {ResolveFieldValue}, which produces a value for a given field on a
-type for a real value.
+type for a real value. In addition, {ResolveFieldGenerator} will be exposed to
+produce an iterator for a field with `List` return type. The internal system may
+optionally define a generator function. In the case where the generator is not
+defined, the GraphQL executor provides a default generator: for example, a
+trivial generator that yields the entire list upon the first iteration.
+
+As an example, a {ResolveFieldValue} might accept the {objectType} `Person`, the
+{field} {"soulMate"}, and the {objectValue} representing John Lennon. It would
+be expected to yield the value representing Yoko Ono.
 
-As an example, this might accept the {objectType} `Person`, the {field}
-{"soulMate"}, and the {objectValue} representing John Lennon. It would be
-expected to yield the value representing Yoko Ono.
+A {ResolveFieldGenerator} might accept the {objectType} `MusicBand`, the {field}
+{"members"}, and the {objectValue} representing the Beatles. It would be
+expected to yield an iterator of values representing John Lennon, Paul
+McCartney, Ringo Starr, and George Harrison.
 
 ResolveFieldValue(objectType, objectValue, fieldName, argumentValues):
 
 - Let {resolver} be the internal function provided by {objectType} for
   determining the resolved value of a field named {fieldName}.
 - Return the result of calling {resolver}, providing {objectValue} and
   {argumentValues}.
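The default generator mentioned above — used when the internal system defines no generator of its own — can be sketched in a few lines (a Python sketch under the stated assumption; the function name is hypothetical):

```python
def default_field_generator(resolved_list):
    # Trivial fallback for ResolveFieldGenerator: wrap the fully resolved
    # list and yield it as a single chunk on the first (and only) iteration.
    # Value completion then treats each yielded chunk as a batch of items.
    yield resolved_list
```

Because value completion iterates over chunks, this fallback lets a backend without native streaming support still satisfy the `@stream` contract: the entire list arrives in one chunk, so every item is available immediately.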
+ResolveFieldGenerator(objectType, objectValue, fieldName, argumentValues,
+initialCount):
+
+- If {objectType} provides an internal function {generatorResolver} for
+  generating a partially resolved value of a list field named {fieldName}:
+  - Let {generatorResolver} be the internal function.
+  - Return the iterator from calling {generatorResolver}, providing
+    {objectValue}, {argumentValues} and {initialCount}.
+- Create {generator} from {ResolveFieldValue(objectType, objectValue, fieldName,
+  argumentValues)}.
+- Return {generator}.
+
 Note: It is common for {resolver} to be asynchronous due to relying on reading
 an underlying database or networked service to produce a value. This
 necessitates the rest of a GraphQL executor to handle an asynchronous execution
-flow.
+flow. In addition, a common implementation of {generator} is to leverage
+asynchronous iterators or asynchronous generators provided by many programming
+languages.
 
 ### Value Completion
 
 After resolving the value for a field, it is completed by ensuring it adheres to
 the expected return type. If the return type is another Object type, then the
-field execution process continues recursively.
-
-CompleteValue(fieldType, fields, result, variableValues):
+field execution process continues recursively. In the case where a value
+returned for a list type field is an iterator due to `@stream` specified on the
+field, value completion iterates over the iterator until the number of items
+yielded by the iterator satisfies `initialCount` specified on the `@stream`
+directive. Unresolved items in the iterator will be stored in a stream record
+which the executor resumes executing after the initial execution finishes.
+
+#### Stream Record
+
+Let {streamField} be a list field with a `@stream` directive provided.
+
+A Stream Record is a structure containing:
+
+- {label}: value derived from the `@stream` directive's `label` argument.
+- {iterator}: created by {ResolveFieldGenerator}.
+- {resolvedItems}: items resolved from the {iterator} but not yet delivered.
+- {index}: indicating the position of the item in the complete list.
+- {path}: a list of field names and indices from root to {streamField}.
+- {fields}: the group of fields grouped by {CollectFields()} for {streamField}.
+- {innerType}: inner type of {streamField}'s type.
+
+CreateStreamRecord(label, initialCount, iterator, resolvedItems, index, fields,
+innerType):
+
+- Construct a stream record based on the parameters passed in.
+
+ResolveStreamRecord(streamRecord, variableValues, subsequentPayloads):
+
+- Let {label}, {iterator}, {resolvedItems}, {index}, {path}, {fields},
+  {innerType} be the corresponding fields on the Stream Record structure.
+- Wait for the next item from {iterator}.
+- If an item is not retrieved because {iterator} has completed:
+  - Return {null}.
+- Let {item} be the item retrieved from {iterator}.
+- Append {index} to {path}.
+- Increment {index}.
+- Let {payload} be the result of calling {CompleteValue(innerType, fields, item,
+  variableValues, subsequentPayloads, path)}.
+- Add an entry to {payload} named `label` with the value {label}.
+- Add an entry to {payload} named `path` with the value {path}.
+- Append {streamRecord} to {subsequentPayloads}.
+- Return {payload}.
+
+CompleteValue(fieldType, fields, result, variableValues, subsequentPayloads,
+parentPath):
 
 - If the {fieldType} is a Non-Null type:
   - Let {innerType} be the inner type of {fieldType}.
@@ -680,11 +915,34 @@ CompleteValue(fieldType, fields, result, variableValues):
 - If {result} is {null} (or another internal value similar to {null} such as
   {undefined}), return {null}.
 - If {fieldType} is a List type:
+  - If {result} is an iterator:
+    - Let {field} be the first entry in {fields}.
+    - Let {innerType} be the inner type of {fieldType}.
+    - Let {streamDirective} be the `@stream` directive provided on {field}.
+    - Let {initialCount} be the value or variable provided to
+      {streamDirective}'s {initialCount} argument.
+    - Let {label} be the value or variable provided to {streamDirective}'s
+      {label} argument.
+    - Let {resolvedItems} be an empty list.
+    - For each {members} in {result}:
+      - Append all items from {members} to {resolvedItems}.
+      - If the length of {resolvedItems} is greater than or equal to
+        {initialCount}:
+        - Let {initialItems} be the sublist of the first {initialCount} items
+          from {resolvedItems}.
+        - Let {remainingItems} be the sublist of the items in {resolvedItems}
+          after the first {initialCount} items.
+        - Let {streamRecord} be the result of calling {CreateStreamRecord(label,
+          initialCount, result, remainingItems, initialCount, fields, innerType,
+          parentPath)}.
+        - Append {streamRecord} to {subsequentPayloads}.
+        - Let {result} be {initialItems}.
+        - Exit the for-each loop.
   - If {result} is not a collection of values, raise a _field error_.
   - Let {innerType} be the inner type of {fieldType}.
   - Return a list where each list item is the result of calling
-    {CompleteValue(innerType, fields, resultItem, variableValues)}, where
-    {resultItem} is each item in {result}.
+    {CompleteValue(innerType, fields, resultItem, variableValues,
+    subsequentPayloads, parentPath)}, where {resultItem} is each item in
+    {result}.
 - If {fieldType} is a Scalar or Enum type:
   - Return the result of {CoerceResult(fieldType, result)}.
 - If {fieldType} is an Object, Interface, or Union type:
@@ -694,8 +952,8 @@ CompleteValue(fieldType, fields, result, variableValues):
   - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
   - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}.
   - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet,
-    objectType, result, variableValues)} _normally_ (allowing for
-    parallelization).
+    objectType, result, variableValues, subsequentPayloads, parentPath)}
+    _normally_ (allowing for parallelization).
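The splitting step above — consuming chunks from the iterator until `initialCount` items are resolved, then dividing them into the initial list and the remainder — can be sketched as follows (a Python sketch with a hypothetical function name; each chunk corresponds to one {members} yielded by the generator):

```python
def split_initial_items(chunk_iterator, initial_count):
    """Sketch of the @stream value-completion step: gather items from the
    iterator's chunks until at least initial_count items are resolved, then
    split them into the initial response items and the remainder that would
    be stored on a stream record for later payloads."""
    resolved_items = []
    for members in chunk_iterator:          # each chunk may contain many items
        resolved_items.extend(members)
        if len(resolved_items) >= initial_count:
            break                           # enough items for the initial payload
    initial_items = resolved_items[:initial_count]
    remaining_items = resolved_items[initial_count:]  # kept on the stream record
    return initial_items, remaining_items
```

Note that the loop exits as soon as `initialCount` is satisfied; any items already pulled from the iterator beyond that count are not discarded but carried over as the stream record's resolved-but-undelivered items.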
**Coercing Results** diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 8dcd9234c..db28604eb 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -10,11 +10,11 @@ the case that any _field error_ was raised on a field and was replaced with ## Response Format -A response to a GraphQL request must be a map. +A response to a GraphQL request must be a map or an event stream of maps. -If the request raised any errors, the response map must contain an entry with -key `errors`. The value of this entry is described in the "Errors" section. If -the request completed without raising any errors, this entry must not be +If the operation encountered any errors, the response map must contain an entry +with key `errors`. The value of this entry is described in the "Errors" section. +If the request completed without raising any errors, this entry must not be present. If the request included execution, the response map must contain an entry with @@ -22,6 +22,24 @@ key `data`. The value of this entry is described in the "Data" section. If the request failed before execution, due to a syntax error, missing information, or validation error, this entry must not be present. +When the response of the GraphQL operation is an event stream, the first value +will be the initial response. All subsequent values may contain `label` and +`path` entries. These two entries are used by clients to identify the the +`@defer` or `@stream` directive from the GraphQL operation that triggered this +value to be returned by the event stream. The combination of these two entries +must be unique across all values returned by the event stream. + +If the response of the GraphQL operation is an event stream, each response map +must contain an entry with key `hasNext`. The value of this entry is `true` for +all but the last response in the stream. The value of this entry is `false` for +the last response of the stream. 
This entry is not required for GraphQL +operations that return a single response map. + +The GraphQL server may determine there are no more values in the event stream +after a previous value with `hasNext` equal to `true` has been emitted. In this +case the last value in the event stream should be a map without `data`, `label`, +and `path` entries, and a `hasNext` entry with a value of `false`. + The response map may also contain an entry with key `extensions`. This entry, if set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional @@ -42,6 +60,11 @@ requested operation. If the operation was a query, this output will be an object of the query root operation type; if the operation was a mutation, this output will be an object of the mutation root operation type. +If the result of the operation is an event stream, the `data` entry in +subsequent values will be an object of the type of a particular field in the +GraphQL result. The adjacent `path` field will contain the path segments of the +field this data is associated with. + If an error was raised before execution begins, the `data` entry should not be present in the result. @@ -107,14 +130,8 @@ syntax element. If an error can be associated to a particular field in the GraphQL result, it must contain an entry with the key `path` that details the path of the response field which experienced the error. This allows clients to identify whether a -`null` result is intentional or caused by a runtime error. - -This field should be a list of path segments starting at the root of the -response and ending with the field associated with the error. Path segments that -represent fields should be strings, and path segments that represent list -indices should be 0-indexed integers. 
If the error happens in an aliased field,
-the path to the error should use the aliased name, since it represents a path in
-the response, not in the request.
+`null` result is intentional or caused by a runtime error. The value of this
+field is described in the "Path" section.

For example, if fetching one of the friends' names fails in the following
operation:
@@ -244,6 +261,32 @@ discouraged.
}
```

+## Path
+
+A `path` field allows for the association of a response payload or error with a
+particular field in a GraphQL result. This field should be a list of path
+segments starting at the root of the response and ending with the field to be
+associated with. Path segments that represent fields should be strings, and path
+segments that represent list indices should be 0-indexed integers. If the path
+is associated with an aliased field, the path should use the aliased name, since
+it represents a path in the response, not in the request.
+
+When the `path` field is present on a GraphQL response, it indicates that the
+`data` field is not the root query or mutation result, but is rather associated
+with a particular field in the root result.
+
+When the `path` field is present on an "Error result", it indicates the response
+field which experienced the error.
+
+## Label
+
+If the response of the GraphQL operation is an event stream, subsequent values
+may contain a string field `label`. This `label` is the same label passed to the
+`@defer` or `@stream` directive that triggered this value. This allows clients
+to identify which `@defer` or `@stream` directive is associated with this value.
+`label` will not be present if the corresponding `@defer` or `@stream` directive
+is not passed a `label` argument.
+
## Serialization Format

GraphQL does not require a specific serialization format.
However, clients

From 43e99975cab70b0929fcce669a380caf8874f5a3 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Wed, 17 Feb 2021 13:03:38 -0500
Subject: [PATCH 02/65] fix typos

# Conflicts:
# spec/Section 6 -- Execution.md
---
 spec/Section 6 -- Execution.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 0bd2ed084..994e69a19 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -827,8 +827,8 @@ be expected to yield the value representing Yoko Ono.
A {ResolveFieldGenerator} might accept the {objectType} `MusicBand`, the {field}
{"members"}, and the {objectValue} representing Beatles. It would be expected to
-yield a iterator of values representing, John Lennon, Paul McCartney, Ringo
-Starr and George Harrison.
+yield an iterator of values representing John Lennon, Paul McCartney, Ringo Starr
+and George Harrison.

ResolveFieldValue(objectType, objectValue, fieldName, argumentValues):

From cb5a3f4ce223947172e682721f809057c44b5d67 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Wed, 17 Feb 2021 13:06:20 -0500
Subject: [PATCH 03/65] clear up that it is legal to support either defer or
 stream individually

# Conflicts:
# spec/Section 3 -- Type System.md
---
 spec/Section 3 -- Type System.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md
index dec2cff30..7aa000b5c 100644
--- a/spec/Section 3 -- Type System.md
+++ b/spec/Section 3 -- Type System.md
@@ -1903,9 +1903,9 @@ by a validator, executor, or client tool such as a code generator.
GraphQL implementations should provide the `@skip` and `@include` directives.

GraphQL implementations are not required to implement the `@defer` and `@stream`
GraphQL implementations that do not support these directives must -not make them available via introspection. +directives. If either or both of these directives are implemented, they must be +implemented according to this specification. GraphQL implementations that do not +support these directives must not make them available via introspection. GraphQL implementations that support the type system definition language must provide the `@deprecated` directive if representing deprecated portions of the From 0eb4426ef2c6583af69f3b87131507aa635de8bd Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 17 Feb 2021 13:12:04 -0500 Subject: [PATCH 04/65] Add sumary of arguments to Type System --- spec/Section 3 -- Type System.md | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 7aa000b5c..b0ddd1fe7 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2155,6 +2155,15 @@ fragment someFragment on User { } ``` +#### @defer Arguments + +- `if: Boolean` - When true, fragment may be deferred. If omitted, defaults to + `true`. +- `label: String` - A unique label across all `@defer` and `@stream` directives + in an operation. This label should be used by GraphQL clients to identify the + data from patch responses and associate it with the correct fragments. If + provided, the GraphQL Server must add it to the payload. + ### @stream ```graphql @@ -2175,3 +2184,14 @@ query myQuery($shouldStream: Boolean) { } } ``` + +#### @stream Arguments + +- `if: Boolean` - When true, field may be streamed. If omitted, defaults to + `true`. +- `label: String` - A unique label across all `@defer` and `@stream` directives + in an operation. This label should be used by GraphQL clients to identify the + data from patch responses and associate it with the correct fragments. If + provided, the GraphQL Server must add it to the payload. 
+- `initialCount: Int` - The number of list items the server should return as + part of the initial response. If omitted, defaults to `0`. From 43bfe0104fbe3719ca6448bdd05b1b155450f609 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Sat, 15 May 2021 11:24:10 -0400 Subject: [PATCH 05/65] Update Section 3 -- Type System.md # Conflicts: # spec/Section 3 -- Type System.md --- spec/Section 3 -- Type System.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index b0ddd1fe7..94299b2a4 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2157,8 +2157,9 @@ fragment someFragment on User { #### @defer Arguments -- `if: Boolean` - When true, fragment may be deferred. If omitted, defaults to - `true`. +- `if: Boolean` - When `true`, fragment may be deferred. When `false`, fragment + will not be deferred and data will be included in the initial response. If + omitted, defaults to `true`. - `label: String` - A unique label across all `@defer` and `@stream` directives in an operation. This label should be used by GraphQL clients to identify the data from patch responses and associate it with the correct fragments. If @@ -2187,8 +2188,9 @@ query myQuery($shouldStream: Boolean) { #### @stream Arguments -- `if: Boolean` - When true, field may be streamed. If omitted, defaults to - `true`. +- `if: Boolean` - When `true`, field may be streamed. When `false`, the field + will not be streamed and all list items will be included in the initial + response. If omitted, defaults to `true`. - `label: String` - A unique label across all `@defer` and `@stream` directives in an operation. This label should be used by GraphQL clients to identify the data from patch responses and associate it with the correct fragments. 
If From acb5bf09756b8f4603c58d29c1114f0ee9f3fca4 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Fri, 19 Nov 2021 15:41:51 -0500 Subject: [PATCH 06/65] clarification on defer/stream requirement # Conflicts: # spec/Section 3 -- Type System.md --- spec/Section 3 -- Type System.md | 22 +++++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 94299b2a4..d21dcd9e3 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2157,9 +2157,9 @@ fragment someFragment on User { #### @defer Arguments -- `if: Boolean` - When `true`, fragment may be deferred. When `false`, fragment - will not be deferred and data will be included in the initial response. If - omitted, defaults to `true`. +- `if: Boolean` - When `true`, fragment _should_ be deferred. When `false`, + fragment will not be deferred and data will be included in the initial + response. If omitted, defaults to `true`. - `label: String` - A unique label across all `@defer` and `@stream` directives in an operation. This label should be used by GraphQL clients to identify the data from patch responses and associate it with the correct fragments. If @@ -2188,8 +2188,8 @@ query myQuery($shouldStream: Boolean) { #### @stream Arguments -- `if: Boolean` - When `true`, field may be streamed. When `false`, the field - will not be streamed and all list items will be included in the initial +- `if: Boolean` - When `true`, field _should_ be streamed. When `false`, the + field will not be streamed and all list items will be included in the initial response. If omitted, defaults to `true`. - `label: String` - A unique label across all `@defer` and `@stream` directives in an operation. This label should be used by GraphQL clients to identify the @@ -2197,3 +2197,15 @@ query myQuery($shouldStream: Boolean) { provided, the GraphQL Server must add it to the payload. 
- `initialCount: Int` - The number of list items the server should return as part of the initial response. If omitted, defaults to `0`. + +Note: The ability to defer and/or stream parts of a response can have a +potentially significant impact on application performance. Developers generally +need clear, predictable control over their application's performance. It is +highly recommended that GraphQL servers honor the `@defer` and `@stream` +directives on each execution. However, the specification allows advanced +use-cases where the server can determine that it is more performant to not defer +and/or stream. Therefore, GraphQL clients _must_ be able to process a response +that ignores the `@defer` and/or `@stream` directives. This also applies to the +`initialCount` argument on the `@stream` directive. Clients _must_ be able to +process a streamed response that contains a different number of initial list +items than what was specified in the `initialCount` argument. From abea59b073c168ac538ca4accc1c3a910023963d Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Sat, 20 Nov 2021 11:58:41 -0500 Subject: [PATCH 07/65] clarify negative values of initialCount # Conflicts: # spec/Section 3 -- Type System.md --- spec/Section 3 -- Type System.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index d21dcd9e3..911be3633 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2196,7 +2196,8 @@ query myQuery($shouldStream: Boolean) { data from patch responses and associate it with the correct fragments. If provided, the GraphQL Server must add it to the payload. - `initialCount: Int` - The number of list items the server should return as - part of the initial response. If omitted, defaults to `0`. + part of the initial response. If omitted, defaults to `0`. If the value of + this argument is less than `0`, it is treated the same as `0`. 
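The `initialCount` handling described above — omitted defaults to `0`, and as of this commit a negative value is treated the same as `0` (a later commit in this series changes this to a field error) — can be sketched with a hypothetical helper; the function name and signature are illustrative, not part of the spec:

```python
def split_stream_items(items, initial_count=0):
    """Split a resolved list into the items returned in the initial
    response and the items delivered later as @stream payloads.

    An omitted initialCount defaults to 0; a negative value is treated
    the same as 0 (per this commit's wording).
    """
    count = max(0, initial_count)
    return items[:count], items[count:]
```

For example, `split_stream_items(["a", "b", "c"], 2)` yields `(["a", "b"], ["c"])`: two items in the initial response, one streamed afterwards.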
Note: The ability to defer and/or stream parts of a response can have a potentially significant impact on application performance. Developers generally From 139d69f83437a8284bb16d486a531131302e13b3 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 25 Nov 2021 11:06:48 -0500 Subject: [PATCH 08/65] allow extensions only subsequent payloads # Conflicts: # spec/Section 7 -- Response.md --- spec/Section 7 -- Response.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index db28604eb..19b073161 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -43,7 +43,9 @@ and `path` entries, and a `hasNext` entry with a value of `false`. The response map may also contain an entry with key `extensions`. This entry, if set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional -restrictions on its contents. +restrictions on its contents. When the response of the GraphQL operation is an +event stream, implementors may send subsequent payloads containing only +`hasNext` and `extensions` entries. To ensure future changes to the protocol do not break existing services and clients, the top level response map must not contain any entries other than the From de5004badca3d02f583379dc02a7dfc19eb0e73c Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Fri, 26 Nov 2021 09:25:24 -0500 Subject: [PATCH 09/65] fix typo # Conflicts: # spec/Section 7 -- Response.md --- spec/Section 7 -- Response.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 19b073161..353f40d2c 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -24,10 +24,10 @@ validation error, this entry must not be present. 
When the response of the GraphQL operation is an event stream, the first value will be the initial response. All subsequent values may contain `label` and -`path` entries. These two entries are used by clients to identify the the -`@defer` or `@stream` directive from the GraphQL operation that triggered this -value to be returned by the event stream. The combination of these two entries -must be unique across all values returned by the event stream. +`path` entries. These two entries are used by clients to identify the `@defer` +or `@stream` directive from the GraphQL operation that triggered this value to +be returned by the event stream. The combination of these two entries must be +unique across all values returned by the event stream. If the response of the GraphQL operation is an event stream, each response map must contain an entry with key `hasNext`. The value of this entry is `true` for From 9e89f42f5c29d9e865db29b2b5b02e825cff107d Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 17:27:07 -0400 Subject: [PATCH 10/65] Raise a field error if initialCount is less than zero # Conflicts: # spec/Section 3 -- Type System.md # spec/Section 6 -- Execution.md --- spec/Section 3 -- Type System.md | 4 ++-- spec/Section 6 -- Execution.md | 1 + 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 911be3633..9eeea3907 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2196,8 +2196,8 @@ query myQuery($shouldStream: Boolean) { data from patch responses and associate it with the correct fragments. If provided, the GraphQL Server must add it to the payload. - `initialCount: Int` - The number of list items the server should return as - part of the initial response. If omitted, defaults to `0`. If the value of - this argument is less than `0`, it is treated the same as `0`. + part of the initial response. If omitted, defaults to `0`. 
A field error will + be raised if the value of this argument is less than `0`. Note: The ability to defer and/or stream parts of a response can have a potentially significant impact on application performance. Developers generally diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 994e69a19..bf2b2ebc1 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -921,6 +921,7 @@ parentPath): - Let {streamDirective} be the `@stream` directive provided on {field}. - Let {initialCount} be the value or variable provided to {streamDirective}'s {initialCount} argument. + - If {initialCount} is less than zero, raise a _field error_. - Let {label} be the value or variable provided to {streamDirective}'s {label} argument. - Let {resolvedItems} be an empty list From f894ba30217a2264af09f508c58de1d11e98a89a Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 6 Dec 2021 13:15:13 -0500 Subject: [PATCH 11/65] data is not necessarily an object in subsequent payloads # Conflicts: # spec/Section 7 -- Response.md --- spec/Section 7 -- Response.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 353f40d2c..47ee7e893 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -63,9 +63,9 @@ of the query root operation type; if the operation was a mutation, this output will be an object of the mutation root operation type. If the result of the operation is an event stream, the `data` entry in -subsequent values will be an object of the type of a particular field in the -GraphQL result. The adjacent `path` field will contain the path segments of the -field this data is associated with. +subsequent values will be of the type of a particular field in the GraphQL +result. The adjacent `path` field will contain the path segments of the field +this data is associated with. 
If an error was raised before execution begins, the `data` entry should not be
present in the result.

From 08053d715b0c3a7be6e3efff04cf580b28934758 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Mon, 6 Dec 2021 17:20:37 -0500
Subject: [PATCH 12/65] add Defer And Stream Directives Are Used On Valid Root
 Field rule

---
 spec/Section 5 -- Validation.md | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md
index 0c1a06c80..8303ce776 100644
--- a/spec/Section 5 -- Validation.md
+++ b/spec/Section 5 -- Validation.md
@@ -1528,6 +1528,34 @@ query ($foo: Boolean = true, $bar: Boolean = false) {
}
```

+### Defer And Stream Directives Are Used On Valid Root Field
+
+**Formal Specification**
+
+- For every {directive} in a document.
+- Let {directiveName} be the name of {directive}.
+- Let {mutationType} be the root Mutation type in {schema}.
+- Let {subscriptionType} be the root Subscription type in {schema}.
+- If {directiveName} is "defer" or "stream":
+  - The parent type of {directive} must not be {mutationType} or
+    {subscriptionType}.
+
+**Explanatory Text**
+
+The defer and stream directives are not allowed to be used on root fields of the
+mutation or subscription type.
+
+For example, the following document will not pass validation because `@defer`
+has been used on a root mutation field:
+
+```raw graphql counter-example
+mutation {
+  ...
@defer {
+    mutationField
+  }
+}
+```
+
### Stream Directives Are Used On List Fields

**Formal Specification**

From e19246b0f4fb21d78fc400638b711239149bd3a4 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Thu, 18 Aug 2022 17:21:50 -0400
Subject: [PATCH 13/65] wait for parent async record to ensure correct order of
 payloads

# Conflicts:
# spec/Section 6 -- Execution.md
---
 cspell.yml                     |   1 -
 spec/Section 6 -- Execution.md | 192 ++++++++++++++++++---------------
 2 files changed, 108 insertions(+), 85 deletions(-)

diff --git a/cspell.yml b/cspell.yml
index 66902e41a..8bc4a231c 100644
--- a/cspell.yml
+++ b/cspell.yml
@@ -11,7 +11,6 @@ words:
  - openwebfoundation
  - parallelization
  - structs
-  - sublist
  - subselection
# Fictional characters / examples
- alderaan

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index bf2b2ebc1..f78af2d39 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -418,13 +418,13 @@ First, the selection set is turned into a grouped field set; then, each
represented field in the grouped field set produces an entry into a response
map.

-ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues,
-subsequentPayloads, parentPath):
+ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues, path,
+subsequentPayloads, asyncRecord):

+- If {path} is not provided, initialize it to an empty list.
- If {subsequentPayloads} is not provided, initialize it to the empty set.
- Let {groupedFieldSet} be the result of {CollectFields(objectType, objectValue,
-  selectionSet, variableValues, subsequentPayloads, parentPath)}.
+  selectionSet, variableValues, path, subsequentPayloads, asyncRecord)}.
- Initialize {resultMap} to an empty ordered map.
- For each {groupedFieldSet} as {responseKey} and {fields}:
  - Let {fieldName} be the name of the first entry in {fields}.
Note: This value
+  {objectType}.
- If {fieldType} is defined:
  - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
-    fields, variableValues, subsequentPayloads, parentPath)}.
+    fields, variableValues, path, subsequentPayloads, asyncRecord)}.
  - Set {responseValue} as the value for {responseKey} in {resultMap}.
- Return {resultMap}.
@@ -585,8 +585,8 @@ The depth-first-search order of the field groups produced by {CollectFields()}
is maintained through execution, ensuring that fields appear in the executed
response in a stable and predictable order.

-CollectFields(objectType, objectValue, selectionSet, variableValues,
-deferredFragments, parentPath, visitedFragments):
+CollectFields(objectType, objectValue, selectionSet, variableValues, path,
+subsequentPayloads, asyncRecord, visitedFragments):

- If {visitedFragments} is not provided, initialize it to the empty set.
- Initialize {groupedFields} to an empty ordered map of lists.
@@ -625,14 +625,16 @@ deferredFragments, parentPath, visitedFragments):
    with the next {selection} in {selectionSet}.
  - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
  - If {deferDirective} is defined:
-    - Let {deferredFragment} be the result of calling
-      {DeferFragment(objectType, objectValue, fragmentSelectionSet,
-      parentPath)}.
-    - Append {deferredFragment} to {deferredFragments}.
+    - Let {label} be the value or variable provided to {deferDirective}'s
+      {label} argument.
+    - Let {deferredFragmentRecord} be the result of calling
+      {CreateDeferredFragmentRecord(label, objectType, objectValue,
+      fragmentSelectionSet, path, asyncRecord)}.
+    - Append {deferredFragmentRecord} to {subsequentPayloads}.
+    - Continue with the next {selection} in {selectionSet}.
  - Let {fragmentGroupedFieldSet} be the result of calling
-    {CollectFields(objectType, fragmentSelectionSet, variableValues,
-    visitedFragments)}.
+    {CollectFields(objectType, objectValue, fragmentSelectionSet,
+    variableValues, path, subsequentPayloads, asyncRecord, visitedFragments)}.
  - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
    - Let {responseKey} be the response key shared by all fields in
      {fragmentGroup}.
@@ -649,14 +651,16 @@ deferredFragments, parentPath, visitedFragments):
      be that directive.
    - If {deferDirective}'s {if} argument is {true} or is a variable in
      {variableValues} with the value {true}:
-      - Let {deferredFragment} be the result of calling
-        {DeferFragment(objectType, objectValue, fragmentSelectionSet,
-        parentPath)}.
-      - Append {deferredFragment} to {deferredFragments}.
+      - Let {label} be the value or variable provided to {deferDirective}'s
+        {label} argument.
+      - Let {deferredFragmentRecord} be the result of calling
+        {CreateDeferredFragmentRecord(label, objectType, objectValue,
+        fragmentSelectionSet, path, asyncRecord)}.
+      - Append {deferredFragmentRecord} to {subsequentPayloads}.
+      - Continue with the next {selection} in {selectionSet}.
    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments, parentPath)}.
+      {CollectFields(objectType, objectValue, fragmentSelectionSet,
+      variableValues, path, subsequentPayloads, asyncRecord, visitedFragments)}.
  - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
    - Let {responseKey} be the response key shared by all fields in
      {fragmentGroup}.
@@ -680,44 +684,53 @@ DoesFragmentTypeApply(objectType, fragmentType):

- if {objectType} is a possible type of {fragmentType}, return {true} otherwise
  return {false}.

-DeferFragment(objectType, objectValue, fragmentSelectionSet, parentPath):
+#### Async Payload Record
+
+An Async Payload Record is either a Deferred Fragment Record or a Stream Record.
+All Async Payload Records are structures containing:

-- Let {label} be the value or the variable to {deferDirective}'s {label}
-  argument.
-- Let {deferredFragmentRecord} be the result of calling - {CreateDeferredFragmentRecord(label, objectType, objectValue, - fragmentSelectionSet, parentPath)}. -- return {deferredFragmentRecord}. +- {label}: value derived from the corresponding `@defer` or `@stream` directive. +- {parentRecord}: optionally an Async Payload Record. +- {errors}: a list of field errors encountered during execution. +- {dataExecution}: A result that can notify when the corresponding execution has + completed. +- {path}: a list of field names and indices from root to the location of the + corresponding `@defer` or `@stream` directive. #### Deferred Fragment Record Let {deferredFragmentRecord} be an inline fragment or fragment spread with `@defer` provided. -Deferred Fragment Record is a structure containing: +Deferred Fragment Record is a structure containing all the entries of Async +Payload Record, and additionally: -- {label}: value derived from the `@defer` directive. - {objectType}: of the {deferredFragmentRecord}. - {objectValue}: of the {deferredFragmentRecord}. - {fragmentSelectionSet}: the top level selection set of {deferredFragmentRecord}. -- {path}: a list of field names and indices from root to - {deferredFragmentRecord}. CreateDeferredFragmentRecord(label, objectType, objectValue, -fragmentSelectionSet, path): +fragmentSelectionSet, path, parentRecord): -- If {path} is not provided, initialize it to an empty list. - Construct a deferred fragment record based on the parameters passed in. +- Initialize {errors} to an empty list. ResolveDeferredFragmentRecord(deferredFragmentRecord, variableValues, subsequentPayloads): -- Let {label}, {objectType}, {objectValue}, {fragmentSelectionSet}, {path} be - the corresponding fields in the deferred fragment record structure. -- Let {payload} be the result of calling - {ExecuteSelectionSet(fragmentSelectionSet, objectType, objectValue, - variableValues, subsequentPayloads, path)}. 
+- Let {label}, {objectType}, {objectValue}, {fragmentSelectionSet}, {path}, + {parentRecord} be the corresponding fields in the deferred fragment record + structure. +- Let {dataExecution} be the asynchronous future value of: + - Let {payload} be the result of {ExecuteSelectionSet(fragmentSelectionSet, + objectType, objectValue, variableValues, path, subsequentPayloads, + deferredFragmentRecord)}. + - If {parentRecord} is defined: + - Wait for the result of {dataExecution} on {parentRecord}. + - Return {payload}. +- Set {dataExecution} on {deferredFragmentRecord}. +- Let {payload} be the result of waiting for {dataExecution}. - Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {path}. - Return {payload}. @@ -730,11 +743,12 @@ coerces any provided argument values, then resolves a value for the field, and finally completes that value either by recursively executing another selection set or coercing a scalar value. -ExecuteField(objectType, objectValue, fieldType, fields, variableValues, -subsequentPayloads, parentPath): +ExecuteField(objectType, objectValue, fieldType, fields, variableValues, path, +subsequentPayloads, asyncRecord): - Let {field} be the first entry in {fields}. - Let {fieldName} be the field name of {field}. +- Append {fieldName} to {path}. - Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field, variableValues)} - If {field} provides the directive `@stream`, let {streamDirective} be that @@ -742,16 +756,14 @@ subsequentPayloads, parentPath): - Let {initialCount} be the value or variable provided to {streamDirective}'s {initialCount} argument. - Let {resolvedValue} be {ResolveFieldGenerator(objectType, objectValue, - fieldName, argumentValues, initialCount)}. + fieldName, argumentValues)}. - Let {result} be the result of calling {CompleteValue(fieldType, fields, - resolvedValue, variableValues, subsequentPayloads, parentPath)}. 
- - Append {fieldName} to the {path} field of every {subsequentPayloads}. + resolvedValue, variableValues, path, subsequentPayloads, asyncRecord)}. - Return {result}. - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName, argumentValues)}. - Let {result} be the result of calling {CompleteValue(fieldType, fields, - resolvedValue, variableValues, subsequentPayloads)}. -- Append {fieldName} to the {path} for every {subsequentPayloads}. + resolvedValue, variableValues, path, subsequentPayloads, asyncRecord)}. - Return {result}. ### Coercing Field Arguments @@ -837,14 +849,13 @@ ResolveFieldValue(objectType, objectValue, fieldName, argumentValues): - Return the result of calling {resolver}, providing {objectValue} and {argumentValues}. -ResolveFieldGenerator(objectType, objectValue, fieldName, argumentValues, -initialCount): +ResolveFieldGenerator(objectType, objectValue, fieldName, argumentValues): - If {objectType} provides an internal function {generatorResolver} for generating a partially resolved value of a list field named {fieldName}: - Let {generatorResolver} be the internal function. - Return the iterator from calling {generatorResolver}, providing - {objectValue}, {argumentValues} and {initialCount}. + {objectValue} and {argumentValues}. - Create {generator} from {ResolveFieldValue(objectType, objectValue, fieldName, argumentValues)}. - Return {generator}. @@ -864,52 +875,58 @@ field execution process continues recursively. In the case where a value returned for a list type field is an iterator due to `@stream` specified on the field, value completion iterates over the iterator until the number of items yielded by the iterator satisfies `initialCount` specified on the `@stream` -directive. Unresolved items in the iterator will be stored in a stream record -which the executor resumes to execute after the initial execution finishes. +directive. #### Stream Record Let {streamField} be a list field with a `@stream` directive provided.
-A Stream Record is a structure containing: +A Stream Record is a structure containing all the entries of Async Payload +Record, and additionally: -- {label}: value derived from the `@stream` directive's `label` argument. - {iterator}: created by {ResolveFieldGenerator}. -- {resolvedItems}: items resolved from the {iterator} but not yet delivered. - {index}: indicating the position of the item in the complete list. -- {path}: a list of field names and indices from root to {streamField}. - {fields}: the group of fields grouped by CollectFields() for {streamField}. - {innerType}: inner type of {streamField}'s type. -CreateStreamRecord(label, initialCount, iterator, resolvedItems, index, fields, -innerType): +CreateStreamRecord(label, iterator, index, fields, innerType, path, +parentRecord): - Construct a stream record based on the parameters passed in. +- Initialize {errors} to an empty list. ResolveStreamRecord(streamRecord, variableValues, subsequentPayloads): -- Let {label}, {iterator}, {resolvedItems}, {index}, {path}, {fields}, +- Let {label}, {parentRecord}, {iterator}, {index}, {path}, {fields}, {innerType} be the corresponding fields on the Stream Record structure. -- Wait for the next item from {iterator}. -- If an item is not retrieved because {iterator} has completed: - - Return {null} -- Let {item} be the item retrieved from {iterator}. -- Append {index} to {path}. -- Increment {index}. -- Let {payload} be the result of calling CompleteValue(innerType, fields, item, - variableValues, subsequentPayloads, path)}. +- Let {indexPath} be {path} with {index} appended. +- Let {dataExecution} be the asynchronous future value of: + - Wait for the next item from {iterator}. + - If an item is not retrieved because {iterator} has completed: + - Return {null}. + - Let {item} be the item retrieved from {iterator}. + - Let {payload} be the result of calling {CompleteValue(innerType, fields, + item, variableValues, indexPath, subsequentPayloads, parentRecord)}.
+ - Increment {index}. + - Let {nextStreamRecord} be the result of calling {CreateStreamRecord(label, + iterator, index, fields, innerType, path, streamRecord)}. + - Append {nextStreamRecord} to {subsequentPayloads}. + - If {parentRecord} is defined: + - Wait for the result of {dataExecution} on {parentRecord}. + - Return {payload}. +- Set {dataExecution} on {streamRecord}. +- Let {payload} be the result of waiting for {dataExecution}. - Add an entry to {payload} named `label` with the value {label}. -- Add an entry to {payload} named `path` with the value {path}. -- Append {streamRecord} to {subsequentPayloads}. +- Add an entry to {payload} named `path` with the value {indexPath}. - Return {payload}. -CompleteValue(fieldType, fields, result, variableValues, subsequentPayloads, -parentPath): +CompleteValue(fieldType, fields, result, variableValues, path, +subsequentPayloads, asyncRecord): - If the {fieldType} is a Non-Null type: - Let {innerType} be the inner type of {fieldType}. - Let {completedResult} be the result of calling {CompleteValue(innerType, - fields, result, variableValues)}. + fields, result, variableValues, path)}. - If {completedResult} is {null}, raise a _field error_. - Return {completedResult}. - If {result} is {null} (or another internal value similar to {null} such as @@ -924,26 +941,33 @@ parentPath): - If {initialCount} is less than zero, raise a _field error_. - Let {label} be the value or variable provided to {streamDirective}'s {label} argument. - - Let {resolvedItems} be an empty list - - For each {members} in {result}: - - Append all items from {members} to {resolvedItems}. - - If the length of {resolvedItems} is greater or equal to {initialCount}: - - Let {initialItems} be the sublist of the first {initialCount} items - from {resolvedItems}. - - Let {remainingItems} be the sublist of the items in {resolvedItems} - after the first {initialCount} items. + - Let {initialItems} be an empty list + - Let {index} be zero. 
+ - While {result} is not closed: + - If {streamDirective} was not provided or {index} is not greater than or + equal to {initialCount}: + - Wait for the next item from {result}. + - Let {resultItem} be the item retrieved from {result}. + - Let {indexPath} be {path} with {index} appended. + - Let {resolvedItem} be the result of calling {CompleteValue(innerType, + fields, resultItem, variableValues, indexPath, subsequentPayloads, + asyncRecord)}. + - Append {resolvedItem} to {initialItems}. + - Increment {index}. + - If {streamDirective} was provided and {index} is greater than or equal + to {initialCount}: - Let {streamRecord} be the result of calling {CreateStreamRecord(label, - initialCount, result, remainingItems, initialCount, fields, innerType, - parentPath)} + result, index, fields, innerType, path, asyncRecord)}. - Append {streamRecord} to {subsequentPayloads}. - Let {result} be {initialItems}. - - Exit for each loop. + - Exit while loop. + - Return {initialItems}. - If {result} is not a collection of values, raise a _field error_. - Let {innerType} be the inner type of {fieldType}. - Return a list where each list item is the result of calling - {CompleteValue(innerType, fields, resultItem, variableValues, - subsequentPayloads, parentPath)}, where {resultItem} is each item in - {result}. + {CompleteValue(innerType, fields, resultItem, variableValues, indexPath, + subsequentPayloads, asyncRecord)}, where {resultItem} is each item in + {result} and {indexPath} is {path} with the index of the item appended. - If {fieldType} is a Scalar or Enum type: - Return the result of {CoerceResult(fieldType, result)}. - If {fieldType} is an Object, Interface, or Union type: @@ -953,7 +977,7 @@ parentPath): - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}. 
- Return the result of evaluating {ExecuteSelectionSet(subSelectionSet, - objectType, result, variableValues, subsequentPayloads, parentPath)} + objectType, result, variableValues, path, subsequentPayloads, asyncRecord)} _normally_ (allowing for parallelization). **Coercing Results** From 2ecd0af3c4541e1aa6d14a014084d3519aaf403f Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 20 Dec 2021 16:00:43 -0500 Subject: [PATCH 14/65] Simplify execution, payloads should begin execution immediately # Conflicts: # spec/Section 6 -- Execution.md --- spec/Section 6 -- Execution.md | 115 ++++++++++----------------------- 1 file changed, 34 insertions(+), 81 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index f78af2d39..a8062a7ae 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -381,21 +381,13 @@ YieldSubsequentPayloads(subsequentPayloads): - While {subsequentPayloads} is not empty: - If a termination signal is received: - For each {record} in {subsequentPayloads}: - - If {record} is a Stream Record: - - Let {iterator} be the correspondent fields on the Stream Record - structure. + - If {record} contains {iterator}: - Send a termination signal to {iterator}. - Return. -- Let {record} be the first complete item in {subsequentPayloads}. +- Let {record} be the first item in {subsequentPayloads} with a completed + {dataExecution}. - Remove {record} from {subsequentPayloads}. - - Assert: {record} must be a Deferred Fragment Record or a Stream Record. - - If {record} is a Deferred Fragment Record: - - Let {payload} be the result of running - {ResolveDeferredFragmentRecord(record, variableValues, - subsequentPayloads)}. - - If {record} is a Stream Record: - - Let {payload} be the result of running {ResolveStreamRecord(record, - variableValues, subsequentPayloads)}. + - Let {payload} be the completed result returned by {dataExecution}. 
- If {payload} is {null}: - If {subsequentPayloads} is empty: - Yield a map containing a field `hasNext` with the value {false}. - Return. @@ -627,10 +619,9 @@ subsequentPayloads, asyncRecord, visitedFragments): - If {deferDirective} is defined: - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - - Let {deferredFragmentRecord} be the result of calling - {CreateDeferredFragmentRecord(label, objectType, objectValue, - fragmentSelectionSet, path, asyncRecord)}. - - Append {deferredFragmentRecord} to {subsequentPayloads}. + - Call {ExecuteDeferredFragment(label, objectType, objectValue, + fragmentSelectionSet, path, variableValues, asyncRecord, + subsequentPayloads)}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, objectValue, fragmentSelectionSet, @@ -653,10 +644,8 @@ subsequentPayloads, asyncRecord, visitedFragments): {variableValues} with the value {true}: - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - - Let {deferredFragmentRecord} be the result of calling - {CreateDeferredFragmentRecord(label, objectType, objectValue, - fragmentSelectionSet, path, asyncRecord)}. - - Append {deferredFragmentRecord} to {subsequentPayloads}. + - Call {ExecuteDeferredFragment(label, objectType, objectValue, + fragmentSelectionSet, path, variableValues, asyncRecord, + subsequentPayloads)}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, objectValue, fragmentSelectionSet, @@ -690,50 +679,31 @@ An Async Payload Record is either a Deferred Fragment Record or a Stream Record. All Async Payload Records are structures containing: - {label}: value derived from the corresponding `@defer` or `@stream` directive. -- {parentRecord}: optionally an Async Payload Record.
+- {path}: a list of field names and indices from root to the location of the + corresponding `@defer` or `@stream` directive. +- {iterator}: The underlying iterator if created from a `@stream` directive. - {errors}: a list of field errors encountered during execution. - {dataExecution}: A result that can notify when the corresponding execution has completed. -- {path}: a list of field names and indices from root to the location of the - corresponding `@defer` or `@stream` directive. - -#### Deferred Fragment Record - -Let {deferredFragmentRecord} be an inline fragment or fragment spread with -`@defer` provided. - -Deferred Fragment Record is a structure containing all the entries of Async -Payload Record, and additionally: -- {objectType}: of the {deferredFragmentRecord}. -- {objectValue}: of the {deferredFragmentRecord}. -- {fragmentSelectionSet}: the top level selection set of - {deferredFragmentRecord}. +#### Execute Deferred Fragment -CreateDeferredFragmentRecord(label, objectType, objectValue, -fragmentSelectionSet, path, parentRecord): +ExecuteDeferredFragment(label, objectType, objectValue, fragmentSelectionSet, +path, variableValues, parentRecord, subsequentPayloads): -- Construct a deferred fragment record based on the parameters passed in. -- Initialize {errors} to an empty list. - -ResolveDeferredFragmentRecord(deferredFragmentRecord, variableValues, -subsequentPayloads): - -- Let {label}, {objectType}, {objectValue}, {fragmentSelectionSet}, {path}, - {parentRecord} be the corresponding fields in the deferred fragment record - structure. +- Let {deferRecord} be an async payload record created from {label} and {path}. +- Initialize {errors} on {deferRecord} to an empty list. - Let {dataExecution} be the asynchronous future value of: - Let {payload} be the result of {ExecuteSelectionSet(fragmentSelectionSet, objectType, objectValue, variableValues, path, subsequentPayloads, - deferredFragmentRecord)}. + deferRecord)}. 
- If {parentRecord} is defined: - Wait for the result of {dataExecution} on {parentRecord}. + - Add an entry to {payload} named `label` with the value {label}. + - Add an entry to {payload} named `path` with the value {path}. - Return {payload}. - Set {dataExecution} on {deferRecord}. -- Let {payload} be the result of waiting for {dataExecution}. -- Add an entry to {payload} named `label` with the value {label}. -- Add an entry to {payload} named `path` with the value {path}. -- Return {payload}. +- Append {deferRecord} to {subsequentPayloads}. ## Executing Fields @@ -877,28 +847,14 @@ field, value completion iterates over the iterator until the number of items yielded by the iterator satisfies `initialCount` specified on the `@stream` directive. -#### Stream Record - -Let {streamField} be a list field with a `@stream` directive provided. - -A Stream Record is a structure containing all the entries of Async Payload -Record, and additionally: - -- {iterator}: created by {ResolveFieldGenerator}. -- {index}: indicating the position of the item in the complete list. -- {fields}: the group of fields grouped by CollectFields() for {streamField}. -- {innerType}: inner type of {streamField}'s type. - -CreateStreamRecord(label, iterator, index, fields, innerType, path, -parentRecord): - -- Construct a stream record based on the parameters passed in. -- Initialize {errors} to an empty list. +#### Execute Stream Field -ResolveStreamRecord(streamRecord, variableValues, subsequentPayloads): +ExecuteStreamRecord(label, iterator, index, fields, innerType, path, +streamRecord, variableValues, subsequentPayloads): -- Let {label}, {parentRecord}, {iterator}, {index}, {path}, {fields}, - {innerType} be the corresponding fields on the Stream Record structure. +- Let {streamRecord} be an async payload record created from {label}, {path}, + and {iterator}. +- Initialize {errors} on {streamRecord} to an empty list. - Let {indexPath} be {path} with {index} appended.
- Let {dataExecution} be the asynchronous future value of: - Wait for the next item from {iterator}. @@ -908,17 +864,15 @@ ResolveStreamRecord(streamRecord, variableValues, subsequentPayloads): - Let {payload} be the result of calling {CompleteValue(innerType, fields, item, variableValues, indexPath, subsequentPayloads, parentRecord)}. - Increment {index}. - - Let {nextStreamRecord} be the result of calling {CreateStreamRecord(label, - iterator, index, fields, innerType, path, streamRecord)}. - - Append {nextStreamRecord} to {subsequentPayloads}. + - Call {ExecuteStreamRecord(label, iterator, index, fields, innerType, path, + streamRecord, variableValues, subsequentPayloads)}. - If {parentRecord} is defined: - Wait for the result of {dataExecution} on {parentRecord}. + - Add an entry to {payload} named `label` with the value {label}. + - Add an entry to {payload} named `path` with the value {indexPath}. - Return {payload}. - Set {dataExecution} on {streamRecord}. -- Let {payload} be the result of waiting for {dataExecution}. -- Add an entry to {payload} named `label` with the value {label}. -- Add an entry to {payload} named `path` with the value {indexPath}. -- Return {payload}. +- Append {streamRecord} to {subsequentPayloads}. CompleteValue(fieldType, fields, result, variableValues, path, subsequentPayloads, asyncRecord): @@ -956,9 +910,8 @@ subsequentPayloads, asyncRecord): - Increment {index}. - If {streamDirective} was provided and {index} is greater than or equal to {initialCount}: - - Let {streamRecord} be the result of calling {CreateStreamRecord(label, - result, index, fields, innerType, path, asyncRecord)}. - - Append {streamRecord} to {subsequentPayloads}. + - Call {ExecuteStreamRecord(label, result, index, fields, innerType, + path, asyncRecord, subsequentPayloads)}. - Let {result} be {initialItems}. - Exit while loop. - Return {initialItems}. 
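The `initialCount` slicing performed by {CompleteValue} above can be sketched as follows. This is a non-normative Python sketch, not spec text: `split_stream` is a hypothetical helper, and a real executor would also complete each initial item and schedule the remainder of the iterator as Async Payload Records.

```python
def split_stream(iterator, initial_count):
    """Consume the first `initial_count` items eagerly; leave the rest
    in the iterator for delivery as subsequent payloads."""
    if initial_count < 0:
        raise ValueError("initialCount must not be negative")  # a field error in the spec
    initial_items = []
    for _ in range(initial_count):
        try:
            initial_items.append(next(iterator))
        except StopIteration:
            break  # the underlying list was shorter than initialCount
    return initial_items, iterator

friends = iter(["Luke", "Han", "Leia", "Chewie"])
initial, remainder = split_stream(friends, 2)
# `initial` is returned in the initial response; each item later pulled
# from `remainder` becomes its own payload with an incrementing index.
```

Note that the live iterator is handed off unexhausted, which is what allows resolution of the remaining items to overlap with delivery of the initial response.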
From 337bb87329352c572c809c21287d698f737aac20 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 20 Dec 2021 16:08:58 -0500 Subject: [PATCH 15/65] Clarify error handling # Conflicts: # spec/Section 6 -- Execution.md --- spec/Section 6 -- Execution.md | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index a8062a7ae..1f4c89d50 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -394,9 +394,9 @@ YieldSubsequentPayloads(subsequentPayloads): - Return. - If {subsequentPayloads} is not empty: - Continue to the next record in {subsequentPayloads}. - - If {record} is not the final element in {subsequentPayloads} + - If {record} is not the final element in {subsequentPayloads}: - Add an entry to {payload} named `hasNext` with the value {true}. - - If {record} is the final element in {subsequentPayloads} + - If {record} is the final element in {subsequentPayloads}: - Add an entry to {payload} named `hasNext` with the value {false}. - Yield {payload} @@ -694,11 +694,16 @@ path, variableValues, parentRecord, subsequentPayloads): - Let {deferRecord} be an async payload record created from {label} and {path}. - Initialize {errors} on {deferRecord} to an empty list. - Let {dataExecution} be the asynchronous future value of: - - Let {payload} be the result of {ExecuteSelectionSet(fragmentSelectionSet, + - Let {payload} be an unordered map. + - Let {data} be the result of {ExecuteSelectionSet(fragmentSelectionSet, objectType, objectValue, variableValues, path, subsequentPayloads, deferRecord)}. + - Append any encountered field errors to {errors}. - If {parentRecord} is defined: - Wait for the result of {dataExecution} on {parentRecord}. + - If {errors} is not empty: + - Add an entry to {payload} named `errors` with the value {errors}. + - Add an entry to {payload} named `data` with the value {data}. 
- Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {path}. - Return {payload}. @@ -849,8 +854,8 @@ directive. #### Execute Stream Field -ExecuteStreamRecord(label, iterator, index, fields, innerType, path, -streamRecord, variableValues, subsequentPayloads): +ExecuteStreamField(label, iterator, index, fields, innerType, path, streamRecord, +variableValues, subsequentPayloads): - Let {streamRecord} be an async payload record created from {label}, {path}, and {iterator}. @@ -860,14 +865,19 @@ streamRecord, variableValues, subsequentPayloads): - Wait for the next item from {iterator}. - If an item is not retrieved because {iterator} has completed: - Return {null}. + - Let {payload} be an unordered map. - Let {item} be the item retrieved from {iterator}. - - Let {payload} be the result of calling {CompleteValue(innerType, fields, - item, variableValues, indexPath, subsequentPayloads, parentRecord)}. + - Let {data} be the result of calling {CompleteValue(innerType, fields, item, + variableValues, indexPath, subsequentPayloads, parentRecord)}. + - Append any encountered field errors to {errors}. - Increment {index}. - - Call {ExecuteStreamRecord(label, iterator, index, fields, innerType, path, + - Call {ExecuteStreamField(label, iterator, index, fields, innerType, path, streamRecord, variableValues, subsequentPayloads)}. - If {parentRecord} is defined: - Wait for the result of {dataExecution} on {parentRecord}. + - If {errors} is not empty: + - Add an entry to {payload} named `errors` with the value {errors}. + - Add an entry to {payload} named `data` with the value {data}. - Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {indexPath}. - Return {payload}. @@ -910,7 +920,7 @@ subsequentPayloads, asyncRecord): - Increment {index}.
- If {streamDirective} was provided and {index} is greater than or equal to {initialCount}: - - Call {ExecuteStreamRecord(label, result, index, fields, innerType, + - Call {ExecuteStreamField(label, result, index, fields, innerType, path, asyncRecord, subsequentPayloads)}. - Let {result} be {initialItems}. - Exit while loop. From 2982dec4c429dc8342d138df03d977f169d6ae86 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 30 Dec 2021 12:57:17 -0500 Subject: [PATCH 16/65] add isCompletedIterator to AsyncPayloadRecord to track completed iterators # Conflicts: # spec/Section 6 -- Execution.md --- spec/Section 6 -- Execution.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 1f4c89d50..6ed29aff0 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -387,13 +387,13 @@ YieldSubsequentPayloads(subsequentPayloads): - Let {record} be the first item in {subsequentPayloads} with a completed {dataExecution}. - Remove {record} from {subsequentPayloads}. - - Let {payload} be the completed result returned by {dataExecution}. - - If {payload} is {null}: + - If {isCompletedIterator} on {record} is {true}: - If {subsequentPayloads} is empty: - Yield a map containing a field `hasNext` with the value {false}. - Return. - If {subsequentPayloads} is not empty: - Continue to the next record in {subsequentPayloads}. + - Let {payload} be the completed result returned by {dataExecution}. - If {record} is not the final element in {subsequentPayloads}: - Add an entry to {payload} named `hasNext` with the value {true}. - If {record} is the final element in {subsequentPayloads}: @@ -682,6 +682,8 @@ All Async Payload Records are structures containing: - {path}: a list of field names and indices from root to the location of the corresponding `@defer` or `@stream` directive. - {iterator}: The underlying iterator if created from a `@stream` directive. 
+- {isCompletedIterator}: a boolean indicating the payload record was generated + from an iterator that has completed. - {errors}: a list of field errors encountered during execution. - {dataExecution}: A result that can notify when the corresponding execution has completed. @@ -864,6 +866,7 @@ variableValues, subsequentPayloads): - Let {dataExecution} be the asynchronous future value of: - Wait for the next item from {iterator}. - If an item is not retrieved because {iterator} has completed: + - Set {isCompletedIterator} to {true} on {streamRecord}. - Return {null}. - Let {payload} be an unordered map. - Let {item} be the item retrieved from {iterator}. From 32fb73b9aefb15417cd77676267e2da252ec2a2f Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Fri, 21 Jan 2022 07:42:28 -0500 Subject: [PATCH 17/65] fix typo --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 6ed29aff0..87b61cec7 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -601,7 +601,7 @@ subsequentPayloads, asyncRecord, visitedFragments): - Append {selection} to the {groupForResponseKey}. - If {selection} is a {FragmentSpread}: - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} provides the directive `@defer` and it's {if} + - If {fragmentSpreadName} provides the directive `@defer` and its {if} argument is {true} or is a variable in {variableValues} with the value {true}: - Let {deferDirective} be that directive. 
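The `hasNext` bookkeeping in {YieldSubsequentPayloads} reduces to: every delivered payload except the final one carries `hasNext: true`. A non-normative sketch of just that bookkeeping, over payloads that are assumed to have already completed (it deliberately ignores waiting on {dataExecution}, termination signals, and the completed-iterator case handled above):

```python
def yield_subsequent_payloads(completed_payloads):
    """Yield each payload map with the `hasNext` entry the response requires."""
    payloads = list(completed_payloads)
    for position, payload in enumerate(payloads):
        # All but the last payload signal that more payloads will follow.
        payload["hasNext"] = position < len(payloads) - 1
        yield payload

stream = list(yield_subsequent_payloads([
    {"label": "friendStream", "path": ["hero", "friends", 2], "data": {"name": "Han"}},
    {"label": "heroDefer", "path": ["hero"], "data": {"homeWorld": "Tatooine"}},
]))
```

In the real algorithm the set of pending records can grow while payloads are being yielded, which is why the spec text checks emptiness of {subsequentPayloads} at yield time rather than snapshotting it up front as this sketch does.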
From 1ff999edcdbac95e9fffcd38f25dee4fb827e31a Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 2 Feb 2022 14:45:30 -0500 Subject: [PATCH 18/65] deferDirective and visitedFragments --- spec/Section 6 -- Execution.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 87b61cec7..d95c0c0e1 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -605,9 +605,10 @@ subsequentPayloads, asyncRecord, visitedFragments): argument is {true} or is a variable in {variableValues} with the value {true}: - Let {deferDirective} be that directive. - - If {fragmentSpreadName} is in {visitedFragments} and {deferDirective} is - not defined, continue with the next {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. + - If {deferDirective} is not defined: + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. - Let {fragment} be the Fragment in the current Document whose name is {fragmentSpreadName}. - If no such {fragment} exists, continue with the next {selection} in From 270b40984f7a536fcc9dafbd11cd87b8677ee485 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 7 Feb 2022 15:33:51 -0500 Subject: [PATCH 19/65] stream if argument, indexPath -> itemPath --- spec/Section 6 -- Execution.md | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index d95c0c0e1..2228d4af3 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -863,7 +863,7 @@ variableValues, subsequentPayloads): - Let {streamRecord} be an async payload record created from {label}, {path}, and {iterator}. - Initialize {errors} on {streamRecord} to an empty list. -- Let {indexPath} be {path} with {index} appended. 
+- Let {itemPath} be {path} with {index} appended. - Let {dataExecution} be the asynchronous future value of: - Wait for the next item from {iterator}. - If an item is not retrieved because {iterator} has completed: @@ -872,7 +872,7 @@ variableValues, subsequentPayloads): - Let {payload} be an unordered map. - Let {item} be the item retrieved from {iterator}. - Let {data} be the result of calling {CompleteValue(innerType, fields, item, - variableValues, indexPath, subsequentPayloads, parentRecord)}. + variableValues, itemPath, subsequentPayloads, parentRecord)}. - Append any encountered field errors to {errors}. - Increment {index}. - Call {ExecuteStreamField(label, iterator, index, fields, innerType, path, @@ -883,7 +883,7 @@ variableValues, subsequentPayloads): - Add an entry to {payload} named `errors` with the value {errors}. - Add an entry to {payload} named `data` with the value {data}. - Add an entry to {payload} named `label` with the value {label}. - - Add an entry to {payload} named `path` with the value {indexPath}. + - Add an entry to {payload} named `path` with the value {itemPath}. - Return {payload}. - Set {dataExecution} on {streamRecord}. - Append {streamRecord} to {subsequentPayloads}. @@ -903,7 +903,9 @@ subsequentPayloads, asyncRecord): - If {result} is an iterator: - Let {field} be the first entry in {fields}. - Let {innerType} be the inner type of {fieldType}. - - Let {streamDirective} be the `@stream` directive provided on {field}. + - If {field} provides the directive `@stream` and its {if} argument is + {true} or is a variable in {variableValues} with the value {true} and : + - Let {streamDirective} be that directive. - Let {initialCount} be the value or variable provided to {streamDirective}'s {initialCount} argument. - If {initialCount} is less than zero, raise a _field error_. @@ -912,18 +914,18 @@ subsequentPayloads, asyncRecord): - Let {initialItems} be an empty list - Let {index} be zero. 
- While {result} is not closed: - - If {streamDirective} was not provided or {index} is not greater than or + - If {streamDirective} is not defined or {index} is not greater than or equal to {initialCount}: - Wait for the next item from {result}. - Let {resultItem} be the item retrieved from {result}. - - Let {indexPath} be {path} with {index} appended. + - Let {itemPath} be {path} with {index} appended. - Let {resolvedItem} be the result of calling {CompleteValue(innerType, - fields, resultItem, variableValues, indexPath, subsequentPayloads, + fields, resultItem, variableValues, itemPath, subsequentPayloads, asyncRecord)}. - Append {resolvedItem} to {initialItems}. - Increment {index}. - - If {streamDirective} was provided and {index} is greater than or equal - to {initialCount}: + - If {streamDirective} is defined and {index} is greater than or equal to + {initialCount}: - Call {ExecuteStreamField(label, result, index, fields, innerType, path, asyncRecord, subsequentPayloads)}. - Let {result} be {initialItems}. @@ -932,9 +934,9 @@ subsequentPayloads, asyncRecord): - If {result} is not a collection of values, raise a _field error_. - Let {innerType} be the inner type of {fieldType}. - Return a list where each list item is the result of calling - {CompleteValue(innerType, fields, resultItem, variableValues, indexPath, + {CompleteValue(innerType, fields, resultItem, variableValues, itemPath, subsequentPayloads, asyncRecord)}, where {resultItem} is each item in - {result} and {indexPath} is {path} with the index of the item appended. + {result} and {itemPath} is {path} with the index of the item appended. - If {fieldType} is a Scalar or Enum type: - Return the result of {CoerceResult(fieldType, result)}. 
- If {fieldType} is an Object, Interface, or Union type: From 75f2258bfebbc8903b1b9851c28aba563541a64f Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 7 Feb 2022 15:43:48 -0500 Subject: [PATCH 20/65] Clarify stream only applies to outermost list of multi-dimensional arrays --- spec/Section 6 -- Execution.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 2228d4af3..bf5bd56b1 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -904,7 +904,9 @@ subsequentPayloads, asyncRecord): - Let {field} be the first entry in {fields}. - Let {innerType} be the inner type of {fieldType}. - If {field} provides the directive `@stream` and its {if} argument is - {true} or is a variable in {variableValues} with the value {true} and : + {true} or is a variable in {variableValues} with the value {true} and + {innerType} is the outermost return type of the list type defined for + {field}: - Let {streamDirective} be that directive. From d8c28d18470fe3cce9abbd0c3de4c38a44bca322 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 7 Mar 2022 16:30:36 -0500 Subject: [PATCH 21/65] =?UTF-8?q?add=20validation=20=E2=80=9CDefer=20And?= =?UTF-8?q?=20Stream=20Directive=20Labels=20Are=20Unique=E2=80=9D?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- spec/Section 5 -- Validation.md | 65 +++++++++++++++++++++++++++++++++ 1 file changed, 65 insertions(+) diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 8303ce776..018d7b91a 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -1556,6 +1556,71 @@ mutation { } ``` +### Defer And Stream Directive Labels Are Unique + +**Formal Specification** + +- For every {directive} in a document.
+- Let {directiveName} be the name of {directive}.
+- If {directiveName} is "defer" or "stream":
+  - For every {argument} in {directive}:
+    - Let {argumentName} be the name of {argument}.
+    - Let {argumentValue} be the value passed to {argument}.
+    - If {argumentName} is "label":
+      - {argumentValue} must not be a variable.
+      - Let {labels} be all label values passed to defer or stream directive
+        label arguments.
+      - {labels} must be a set of unique values.
+
+**Explanatory Text**
+
+The `@defer` and `@stream` directives each accept an argument "label". This
+label may be used by GraphQL clients to uniquely identify response payloads. If
+a label is passed, it must not be a variable and it must be unique among all
+other `@defer` and `@stream` directives in the document.
+
+For example, the following document is valid:
+
+```graphql example
+{
+  dog {
+    ...fragmentOne
+    ...fragmentTwo @defer(label: "dogDefer")
+  }
+  pets @stream(label: "petStream") {
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+
+fragment fragmentTwo on Dog {
+  owner {
+    name
+  }
+}
+```
+
+For example, the following document will not pass validation because the same
+label is used in different `@defer` and `@stream` directives:
+
+```raw graphql counter-example
+{
+  dog {
+    ...fragmentOne @defer(label: "MyLabel")
+  }
+  pets @stream(label: "MyLabel") {
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+```
+
 ### Stream Directives Are Used On List Fields

 **Formal Specification**

From eb3a4e34eb00a87bda39d3460c7ef058f824f828 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Tue, 8 Mar 2022 10:41:32 -0500
Subject: [PATCH 22/65] Clarification on labels

---
 spec/Section 3 -- Type System.md | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md
index 9eeea3907..b78ec8189 100644
--- a/spec/Section 3 -- Type System.md
+++ b/spec/Section 3 -- Type System.md
@@ -2160,10 +2160,11 @@ fragment
someFragment on User {
 - `if: Boolean` - When `true`, fragment _should_ be deferred. When `false`,
   fragment will not be deferred and data will be included in the initial
   response. If omitted, defaults to `true`.
-- `label: String` - A unique label across all `@defer` and `@stream` directives
-  in an operation. This label should be used by GraphQL clients to identify the
-  data from patch responses and associate it with the correct fragments. If
-  provided, the GraphQL Server must add it to the payload.
+- `label: String` - May be used by GraphQL clients to identify the data from
+  responses and associate it with the corresponding defer directive. If
+  provided, the GraphQL Server must add it to the corresponding payload. `label`
+  must be a unique label across all `@defer` and `@stream` directives in a
+  document. `label` must not be provided as a variable.

 ### @stream

@@ -2191,10 +2192,12 @@ query myQuery($shouldStream: Boolean) {
 - `if: Boolean` - When `true`, field _should_ be streamed. When `false`, the
   field will not be streamed and all list items will be included in the initial
   response. If omitted, defaults to `true`.
-- `label: String` - A unique label across all `@defer` and `@stream` directives
-  in an operation. This label should be used by GraphQL clients to identify the
-  data from patch responses and associate it with the correct fragments. If
-  provided, the GraphQL Server must add it to the payload.
+- `label: String` - May be used by GraphQL clients to identify the data from
+  responses and associate it with the corresponding stream directive. If
+  provided, the GraphQL Server must add it to the corresponding payload. `label`
+  must be a unique label across all `@defer` and `@stream` directives in a
+  document. `label` must not be provided as a variable.
+
 - `initialCount: Int` - The number of list items the server should return as
   part of the initial response. If omitted, defaults to `0`.
A field error will be raised if the value of this argument is less than `0`. From f2b50bfba12fae2fcd7d15a2b732050d469827a9 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 23 Mar 2022 14:42:50 -0400 Subject: [PATCH 23/65] fix wrong quotes --- spec/Section 3 -- Type System.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index b78ec8189..c543d3b95 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2142,10 +2142,10 @@ delivered in a subsequent response. `@include` and `@skip` take precedence over ```graphql example query myQuery($shouldDefer: Boolean) { - user { - name - ...someFragment @defer(label: 'someLabel', if: $shouldDefer) - } + user { + name + ...someFragment @defer(label: "someLabel", if: $shouldDefer) + } } fragment someFragment on User { id From 92f02f3c7cfd7decce70bcabdfa1473dc9510ac6 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 23 Mar 2022 14:45:42 -0400 Subject: [PATCH 24/65] remove label/path requirement --- spec/Section 7 -- Response.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 47ee7e893..690c8a016 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -26,8 +26,8 @@ When the response of the GraphQL operation is an event stream, the first value will be the initial response. All subsequent values may contain `label` and `path` entries. These two entries are used by clients to identify the `@defer` or `@stream` directive from the GraphQL operation that triggered this value to -be returned by the event stream. The combination of these two entries must be -unique across all values returned by the event stream. +be returned by the event stream. When a label is provided, the combination of +these two entries will be unique across all values returned by the event stream. 
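As a non-normative illustration of the paragraph above, a subsequent value in
the event stream might take the following shape (the label `"someLabel"`, the
path `["user"]`, and the field values are illustrative only):

```json example
{
  "label": "someLabel",
  "path": ["user"],
  "data": { "name": "Mark" },
  "hasNext": true
}
```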
If the response of the GraphQL operation is an event stream, each response map
must contain an entry with key `hasNext`. The value of this entry is `true` for

From 049bce8cfd397f8a25f4e31ea36ff43fc8c911ff Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Thu, 9 Jun 2022 17:26:33 -0500
Subject: [PATCH 25/65] add missing line

---
 spec/Section 6 -- Execution.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index bf5bd56b1..476108211 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -30,6 +30,7 @@ request is determined by the result of executing this operation according to
 the “Executing Operations” section below.

 ExecuteRequest(schema, document, operationName, variableValues, initialValue):
+
 Note: the execution assumes the implementing language supports coroutines.
 Alternatively, the socket can provide a write buffer pointer to allow
 {ExecuteRequest()} to directly write payloads into the buffer.

From 9a0750069f2e4260c61c3c2f5d59130bca6ebfc6 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Thu, 9 Jun 2022 17:28:59 -0500
Subject: [PATCH 26/65] fix ExecuteRequest

---
 spec/Section 6 -- Execution.md | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 476108211..88c54571c 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -39,13 +39,8 @@ Alternatively, the socket can provide a write buffer pointer to allow
 - Let {coercedVariableValues} be the result of {CoerceVariableValues(schema,
   operation, variableValues)}.
 - If {operation} is a query operation:
-  - Let {executionResult} be the result of calling {ExecuteQuery(operation,
-    schema, coercedVariableValues, initialValue, subsequentPayloads)}.
-  - If {executionResult} is an iterator:
-    - For each {payload} in {executionResult}:
-      - Yield {payload}.
-  - Otherwise:
-    - Return {executionResult}.
+ - Return {ExecuteQuery(operation, schema, coercedVariableValues, + initialValue)}. - Otherwise if {operation} is a mutation operation: - Return {ExecuteMutation(operation, schema, coercedVariableValues, initialValue)}. From 7c5e1dacc14d56cf4aadb21d6eaf3441616851b2 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 9 Jun 2022 17:31:06 -0500 Subject: [PATCH 27/65] fix response --- spec/Section 7 -- Response.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 690c8a016..3743d4f8d 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -12,9 +12,9 @@ the case that any _field error_ was raised on a field and was replaced with A response to a GraphQL request must be a map or an event stream of maps. -If the operation encountered any errors, the response map must contain an entry -with key `errors`. The value of this entry is described in the "Errors" section. -If the request completed without raising any errors, this entry must not be +If the request raised any errors, the response map must contain an entry with +key `errors`. The value of this entry is described in the "Errors" section. If +the request completed without raising any errors, this entry must not be present. 
If the request included execution, the response map must contain an entry with

From 19cb9c33753fb99b2e77beda03f509e1a8948d87 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Wed, 3 Aug 2022 14:41:10 -0400
Subject: [PATCH 28/65] Align deferred fragment field collection with reference
 implementation

---
 spec/Section 6 -- Execution.md | 57 ++++++++++++++++++++++++----------
 1 file changed, 40 insertions(+), 17 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 88c54571c..eaebbe3ae 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -411,8 +411,9 @@ subsequentPayloads, asyncRecord):

 - If {path} is not provided, initialize it to an empty list.
 - If {subsequentPayloads} is not provided, initialize it to the empty set.
-- Let {groupedFieldSet} be the result of {CollectFields(objectType, objectValue,
-  selectionSet, variableValues, path subsequentPayloads, asyncRecord)}.
+- Let {groupedFieldSet} and {deferredGroupedFieldsList} be the result of
+  {CollectFields(objectType, objectValue, selectionSet, variableValues, path,
+  asyncRecord)}.
 - Initialize {resultMap} to an empty ordered map.
 - For each {groupedFieldSet} as {responseKey} and {fields}:
   - Let {fieldName} be the name of the first entry in {fields}. Note: This value
@@ -423,6 +424,10 @@ subsequentPayloads, asyncRecord):
   - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
     fields, variableValues, path, subsequentPayloads, asyncRecord)}.
   - Set {responseValue} as the value for {responseKey} in {resultMap}.
+- For each {deferredGroupedFieldSet} and {label} in {deferredGroupedFieldsList}:
+  - Call {ExecuteDeferredFragment(label, objectType, objectValue,
+    deferredGroupedFieldSet, path, variableValues, asyncRecord,
+    subsequentPayloads)}.
 - Return {resultMap}.

 Note: {resultMap} is ordered by which fields appear first in the operation.
This @@ -574,10 +579,12 @@ is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. CollectFields(objectType, objectValue, selectionSet, variableValues, path, -subsequentPayloads, asyncRecord, visitedFragments): +asyncRecord, visitedFragments, deferredGroupedFieldsList): - If {visitedFragments} is not provided, initialize it to the empty set. - Initialize {groupedFields} to an empty ordered map of lists. +- If {deferredGroupedFieldsList} is not provided, initialize it to an empty + list. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that directive. @@ -616,13 +623,17 @@ subsequentPayloads, asyncRecord, visitedFragments): - If {deferDirective} is defined: - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - - Call {ExecuteDeferredFragment(label, objectType, objectValue, - fragmentSelectionSet, path, variableValues, asyncRecord, - subsequentPayloads)}. + - Let {deferredGroupedFields} be the result of calling + {CollectFields(objectType, objectValue, fragmentSelectionSet, + variableValues, path, asyncRecord, visitedFragments, + deferredGroupedFieldsList)}. + - Append a record containing {label} and {deferredGroupedFields} to + {deferredGroupedFieldsList}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, subsequentPayloads, asyncRecord, visitedFragments)}. + variableValues, path, asyncRecord, visitedFragments, + deferredGroupedFieldsList)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. 
@@ -641,19 +652,24 @@ subsequentPayloads, asyncRecord, visitedFragments): {variableValues} with the value {true}: - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - - Call {ExecuteDeferredFragment(label, objectType, objectValue, - fragmentSelectionSet, path, asyncRecord, subsequentPayloads)}. + - Let {deferredGroupedFields} be the result of calling + {CollectFields(objectType, objectValue, fragmentSelectionSet, + variableValues, path, asyncRecord, visitedFragments, + deferredGroupedFieldsList)}. + - Append a record containing {label} and {deferredGroupedFields} to + {deferredGroupedFieldsList}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, subsequentPayloads, asyncRecord, visitedFragments)}. + variableValues, path, asyncRecord, visitedFragments, + deferredGroupedFieldsList)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - Append all items in {fragmentGroup} to {groupForResponseKey}. -- Return {groupedFields}. +- Return {groupedFields} and {deferredGroupedFieldsList}. Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` directives may be applied in either order since they apply commutatively. @@ -687,22 +703,29 @@ All Async Payload Records are structures containing: #### Execute Deferred Fragment -ExecuteDeferredFragment(label, objectType, objectValue, fragmentSelectionSet, -path, variableValues, parentRecord, subsequentPayloads): +ExecuteDeferredFragment(label, objectType, objectValue, groupedFieldSet, path, +variableValues, parentRecord, subsequentPayloads): - Let {deferRecord} be an async payload record created from {label} and {path}. 
- Initialize {errors} on {deferRecord} to an empty list.
- Let {dataExecution} be the asynchronous future value of:
  - Let {payload} be an unordered map.
-  - Let {data} be the result of {ExecuteSelectionSet(fragmentSelectionSet,
-    objectType, objectValue, variableValues, path, subsequentPayloads,
-    deferRecord)}.
+  - Initialize {resultMap} to an empty ordered map.
+  - For each {groupedFieldSet} as {responseKey} and {fields}:
+    - Let {fieldName} be the name of the first entry in {fields}. Note: This
+      value is unaffected if an alias is used.
+    - Let {fieldType} be the return type defined for the field {fieldName} of
+      {objectType}.
+    - If {fieldType} is defined:
+      - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
+        fields, variableValues, path, subsequentPayloads, deferRecord)}.
+      - Set {responseValue} as the value for {responseKey} in {resultMap}.
   - Append any encountered field errors to {errors}.
   - If {parentRecord} is defined:
     - Wait for the result of {dataExecution} on {parentRecord}.
   - If {errors} is not empty:
     - Add an entry to {payload} named `errors` with the value {errors}.
-  - Add an entry to {payload} named `data` with the value {data}.
+  - Add an entry to {payload} named `data` with the value {resultMap}.
   - Add an entry to {payload} named `label` with the value {label}.
   - Add an entry to {payload} named `path` with the value {path}.
   - Return {payload}.
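As a non-normative illustration, a completed {payload} produced by
{ExecuteDeferredFragment} might look like the following map (the label, path,
and field values are illustrative only):

```json example
{
  "label": "homeWorldDefer",
  "path": ["person"],
  "data": { "homeWorld": { "name": "Tatooine" } }
}
```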
From c747f619421dbeb32772d66cdd126c9bca34ca53 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 14:20:15 -0400 Subject: [PATCH 29/65] spec updates to reflect latest discussions --- spec/Section 6 -- Execution.md | 64 ++++++++++++++---------- spec/Section 7 -- Response.md | 91 +++++++++++++++++++++++++--------- 2 files changed, 107 insertions(+), 48 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index eaebbe3ae..b33380c8e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -144,10 +144,10 @@ ExecuteQuery(query, schema, variableValues, initialValue): - If {subsequentPayloads} is empty: - Return an unordered map containing {data} and {errors}. - If {subsequentPayloads} is not empty: - - Yield an unordered map containing {data}, {errors}, and an entry named - {hasNext} with the value {true}. + - Let {intialResponse} be an unordered map containing {data}, {errors}, and an + entry named {hasNext} with the value {true}. - Let {iterator} be the result of running - {YieldSubsequentPayloads(subsequentPayloads)}. + {YieldSubsequentPayloads(intialResponse, subsequentPayloads)}. - For each {payload} yielded by {iterator}: - If a termination signal is received: - Send a termination signal to {iterator}. @@ -178,10 +178,10 @@ ExecuteMutation(mutation, schema, variableValues, initialValue): - If {subsequentPayloads} is empty: - Return an unordered map containing {data} and {errors}. - If {subsequentPayloads} is not empty: - - Yield an unordered map containing {data}, {errors}, and an entry named - {hasNext} with the value {true}. + - Let {intialResponse} be an unordered map containing {data}, {errors}, and an + entry named {hasNext} with the value {true}. - Let {iterator} be the result of running - {YieldSubsequentPayloads(subsequentPayloads)}. + {YieldSubsequentPayloads(intialResponse, subsequentPayloads)}. 
- For each {payload} yielded by {iterator}: - If a termination signal is received: - Send a termination signal to {iterator}. @@ -341,10 +341,10 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - If {subsequentPayloads} is empty: - Return an unordered map containing {data} and {errors}. - If {subsequentPayloads} is not empty: - - Yield an unordered map containing {data}, {errors}, and an entry named - {hasNext} with the value {true}. + - Let {intialResponse} be an unordered map containing {data}, {errors}, and an + entry named {hasNext} with the value {true}. - Let {iterator} be the result of running - {YieldSubsequentPayloads(subsequentPayloads)}. + {YieldSubsequentPayloads(intialResponse, subsequentPayloads)}. - For each {payload} yielded by {iterator}: - If a termination signal is received: - Send a termination signal to {iterator}. @@ -372,29 +372,42 @@ If an operation contains subsequent payload records resulting from `@stream` or `@defer` directives, the {YieldSubsequentPayloads} algorithm defines how the payloads should be processed. -YieldSubsequentPayloads(subsequentPayloads): +YieldSubsequentPayloads(intialResponse, subsequentPayloads): +- Let {initialRecords} be any items in {subsequentPayloads} with a completed + {dataExecution}. +- Initialize {initialIncremental} to an empty list. +- For each {record} in {initialRecords}: + - Remove {record} from {subsequentPayloads}. + - If {isCompletedIterator} on {record} is {true}: + - Continue to the next record in {records}. + - Let {payload} be the completed result returned by {dataExecution}. + - Append {payload} to {initialIncremental}. +- If {initialIncremental} is not empty: + - Add an entry to {intialResponse} named `incremental` containing the value + {incremental}. +- Yield {intialResponse}. 
 - While {subsequentPayloads} is not empty:
   - If a termination signal is received:
     - For each {record} in {subsequentPayloads}:
       - If {record} contains {iterator}:
         - Send a termination signal to {iterator}.
     - Return.
-- Let {record} be the first item in {subsequentPayloads} with a completed
+- Wait for at least one record in {subsequentPayloads} to have a completed
   {dataExecution}.
-  - Remove {record} from {subsequentPayloads}.
-  - If {isCompletedIterator} on {record} is {true}:
-    - If {subsequentPayloads} is empty:
-      - Yield a map containing a field `hasNext` with the value {false}.
-      - Return.
-    - If {subsequentPayloads} is not empty:
-      - Continue to the next record in {subsequentPayloads}.
-  - Let {payload} be the completed result returned by {dataExecution}.
-  - If {record} is not the final element in {subsequentPayloads}:
-    - Add an entry to {payload} named `hasNext` with the value {true}.
-  - If {record} is the final element in {subsequentPayloads}:
-    - Add an entry to {payload} named `hasNext` with the value {false}.
-  - Yield {payload}
+- Let {subsequentResponse} be an unordered map with an entry {incremental}
+  initialized to an empty list.
+- Let {records} be the items in {subsequentPayloads} with a completed
+  {dataExecution}.
+  - For each {record} in {records}:
+    - Remove {record} from {subsequentPayloads}.
+    - If {isCompletedIterator} on {record} is {true}:
+      - Continue to the next record in {records}.
+    - Let {payload} be the completed result returned by {dataExecution}.
+    - Append {payload} to {incremental}.
+  - If {subsequentPayloads} is empty:
+    - Add an entry to {subsequentResponse} named `hasNext` with the value
+      {false}.
+  - If {subsequentPayloads} is not empty:
+    - Add an entry to {subsequentResponse} named `hasNext` with the value
+      {true}.
+  - Yield {subsequentResponse}

 ## Executing Selection Sets

@@ -900,7 +913,8 @@ variableValues, subsequentPayloads):
     - Wait for the result of {dataExecution} on {parentRecord}.
   - If {errors} is not empty:
     - Add an entry to {payload} named `errors` with the value {errors}.
-  - Add an entry to {payload} named `data` with the value {data}.
+ - Add an entry to {payload} named `items` with a list containing the value + {data}. - Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {itemPath}. - Return {payload}. diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 3743d4f8d..b2fe30a9f 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -23,11 +23,14 @@ request failed before execution, due to a syntax error, missing information, or validation error, this entry must not be present. When the response of the GraphQL operation is an event stream, the first value -will be the initial response. All subsequent values may contain `label` and -`path` entries. These two entries are used by clients to identify the `@defer` -or `@stream` directive from the GraphQL operation that triggered this value to -be returned by the event stream. When a label is provided, the combination of -these two entries will be unique across all values returned by the event stream. +will be the initial response. All subsequent values may contain an `incremental` +entry, containing a list of Defer or Stream responses. + +The `label` and `path` entries on Defer and Stream responses are used by clients +to identify the `@defer` or `@stream` directive from the GraphQL operation that +triggered this response to be included in an `incremental` entry on a value +returned by the event stream. When a label is provided, the combination of these +two entries will be unique across all values returned by the event stream. If the response of the GraphQL operation is an event stream, each response map must contain an entry with key `hasNext`. The value of this entry is `true` for @@ -45,11 +48,13 @@ set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional restrictions on its contents. 
When the response of the GraphQL operation is an event stream, implementors may send subsequent payloads containing only -`hasNext` and `extensions` entries. +`hasNext` and `extensions` entries. Defer and Stream responses may also contain +an entry with the key `extensions`, also reserved for implementors to extend the +protocol however they see fit. To ensure future changes to the protocol do not break existing services and clients, the top level response map must not contain any entries other than the -three described above. +five described above. Note: When `errors` is present in the response, it may be helpful for it to appear first when serialized to make it more clear when errors are present in a @@ -62,11 +67,6 @@ requested operation. If the operation was a query, this output will be an object of the query root operation type; if the operation was a mutation, this output will be an object of the mutation root operation type. -If the result of the operation is an event stream, the `data` entry in -subsequent values will be of the type of a particular field in the GraphQL -result. The adjacent `path` field will contain the path segments of the field -this data is associated with. - If an error was raised before execution begins, the `data` entry should not be present in the result. @@ -263,7 +263,43 @@ discouraged. } ``` -## Path +### Incremental + +The `incremental` entry in the response is a non-empty list of Defer or Stream +responses. If the response of the GraphQL operation is an event stream, this +field may appear on both the initial and subsequent values. + +#### Stream response + +A stream response is a map that may appear as an item in the `incremental` entry +of a response. A stream response is the result of an associated `@stream` +directive in the operation. A stream response must contain `items` and `path` +entries and may contain `label`, `errors`, and `extensions` entries. 
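As a non-normative illustration, a stream response for a list field `pets`
streamed with `initialCount: 2` might look like the following (all names and
values are illustrative only):

```json example
{
  "items": [{ "name": "Fido" }],
  "path": ["pets", 2],
  "label": "petStream"
}
```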
+
+##### Items
+
+The `items` entry in a stream response is a list of results from the execution
+of the associated `@stream` directive. This output will be a list of the same
+type as the field with the associated `@stream` directive. If `items` is set to
+`null`, it indicates that an error has caused a `null` to bubble up to a field
+higher than the list field with the associated `@stream` directive.
+
+#### Defer response
+
+A defer response is a map that may appear as an item in the `incremental` entry
+of a response. A defer response is the result of an associated `@defer`
+directive in the operation. A defer response must contain `data` and `path`
+entries and may contain `label`, `errors`, and `extensions` entries.
+
+##### Data
+
+The `data` entry in a Defer response will be of the type of a particular field
+in the GraphQL result. The adjacent `path` field will contain the path segments
+of the field this data is associated with. If `data` is set to `null`, it
+indicates that an error has caused a `null` to bubble up to a field higher than
+the field that contains the fragment with the associated `@defer` directive.
+
+#### Path

 A `path` field allows for the association to a particular field in a GraphQL
 result. This field should be a list of path segments starting at the root of the
 response and ending with the field to be associated with. Path segments that
 represent fields should be strings, and path segments that represent list
 indices should be 0-indexed integers. If the path is associated to an aliased
 field, the path should use the aliased name, since it represents a path in the
 response, not in the request.

-When the `path` field is present on a GraphQL response, it indicates that the
-`data` field is not the root query or mutation result, but is rather associated
-to a particular field in the root result.
+When the `path` field is present on a Stream response, it indicates that the
+`items` field represents the partial result of the list field containing the
+corresponding `@stream` directive.
All but the final path segment must
+refer to the location of the list field containing the corresponding `@stream`
+directive. The final segment of the path list must be a 0-indexed integer. This
+integer indicates the initial index of the items in the `items` entry: the
+items occupy the range that begins at this index and whose length is the length
+of the `items` list.
+
+When the `path` field is present on a Defer response, it indicates that the
+`data` field represents the result of the fragment containing the corresponding
+`@defer` directive. The path segments must point to the location of the result
+of the field containing the associated `@defer` directive.

 When the `path` field is present on an "Error result", it indicates the
 response field which experienced the error.

-## Label
+#### Label

-If the response of the GraphQL operation is an event stream, subsequent values
-may contain a string field `label`. This `label` is the same label passed to the
-`@defer` or `@stream` directive that triggered this value. This allows clients
-to identify which `@defer` or `@stream` directive is associated with this value.
-`label` will not be present if the corresponding `@defer` or `@stream` directive
-is not passed a `label` argument.
+Stream and Defer responses may contain a string field `label`. This `label` is
+the same label passed to the `@defer` or `@stream` directive associated with the
+response. This allows clients to identify which `@defer` or `@stream` directive
+is associated with this value. `label` will not be present if the corresponding
+`@defer` or `@stream` directive is not passed a `label` argument.
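As a non-normative illustration, an event stream for an operation using both
directives might consist of an initial response followed by a single subsequent
response (all labels, paths, and values are illustrative only):

```json example
{
  "data": {
    "person": { "name": "Luke Skywalker" },
    "pets": [{ "name": "Fido" }, { "name": "Rex" }]
  },
  "hasNext": true
}
```

```json example
{
  "incremental": [
    {
      "label": "homeWorldDefer",
      "path": ["person"],
      "data": { "homeWorld": { "name": "Tatooine" } }
    },
    {
      "label": "petStream",
      "path": ["pets", 2],
      "items": [{ "name": "Cleo" }]
    }
  ],
  "hasNext": false
}
```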
## Serialization Format From 6f3c715785e1cd23d8e9d04d5d84554d2f1d5a15 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 14:26:33 -0400 Subject: [PATCH 30/65] Note about mutation execution order --- spec/Section 6 -- Execution.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index b33380c8e..01e82ba43 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -554,6 +554,11 @@ A correct executor must generate the following result for that selection set: } ``` +When subsections contain a `@stream` or `@defer` directive, these subsections +are no longer required to execute serially. Exeuction of the deferred or +streamed sections of the subsection may be executed in parallel, as defined in +{ExecuteStreamField} and {ExecuteDeferredFragment}. + ### Field Collection Before execution, the selection set is converted to a grouped field set by From 7c9ea0abf1d871ead3abcc87bbbc2e5764c29d4a Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 14:38:06 -0400 Subject: [PATCH 31/65] minor change for uniqueness --- spec/Section 7 -- Response.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index b2fe30a9f..32dedbe14 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -30,7 +30,8 @@ The `label` and `path` entries on Defer and Stream responses are used by clients to identify the `@defer` or `@stream` directive from the GraphQL operation that triggered this response to be included in an `incremental` entry on a value returned by the event stream. When a label is provided, the combination of these -two entries will be unique across all values returned by the event stream. +two entries will be unique across all Defer and Stream responses returned in the +event stream. 
If the response of the GraphQL operation is an event stream, each response map must contain an entry with key `hasNext`. The value of this entry is `true` for From d84939ea3390beb51725527c0d7998c5c7667617 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 16:57:09 -0400 Subject: [PATCH 32/65] fix typos --- spec/Section 6 -- Execution.md | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 01e82ba43..b5c8db97b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -144,10 +144,10 @@ ExecuteQuery(query, schema, variableValues, initialValue): - If {subsequentPayloads} is empty: - Return an unordered map containing {data} and {errors}. - If {subsequentPayloads} is not empty: - - Let {intialResponse} be an unordered map containing {data}, {errors}, and an - entry named {hasNext} with the value {true}. + - Let {initialResponse} be an unordered map containing {data}, {errors}, and + an entry named {hasNext} with the value {true}. - Let {iterator} be the result of running - {YieldSubsequentPayloads(intialResponse, subsequentPayloads)}. + {YieldSubsequentPayloads(initialResponse, subsequentPayloads)}. - For each {payload} yielded by {iterator}: - If a termination signal is received: - Send a termination signal to {iterator}. @@ -178,10 +178,10 @@ ExecuteMutation(mutation, schema, variableValues, initialValue): - If {subsequentPayloads} is empty: - Return an unordered map containing {data} and {errors}. - If {subsequentPayloads} is not empty: - - Let {intialResponse} be an unordered map containing {data}, {errors}, and an - entry named {hasNext} with the value {true}. + - Let {initialResponse} be an unordered map containing {data}, {errors}, and + an entry named {hasNext} with the value {true}. - Let {iterator} be the result of running - {YieldSubsequentPayloads(intialResponse, subsequentPayloads)}. 
+ {YieldSubsequentPayloads(initialResponse, subsequentPayloads)}. - For each {payload} yielded by {iterator}: - If a termination signal is received: - Send a termination signal to {iterator}. @@ -341,10 +341,10 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - If {subsequentPayloads} is empty: - Return an unordered map containing {data} and {errors}. - If {subsequentPayloads} is not empty: - - Let {intialResponse} be an unordered map containing {data}, {errors}, and an - entry named {hasNext} with the value {true}. + - Let {initialResponse} be an unordered map containing {data}, {errors}, and + an entry named {hasNext} with the value {true}. - Let {iterator} be the result of running - {YieldSubsequentPayloads(intialResponse, subsequentPayloads)}. + {YieldSubsequentPayloads(initialResponse, subsequentPayloads)}. - For each {payload} yielded by {iterator}: - If a termination signal is received: - Send a termination signal to {iterator}. @@ -372,7 +372,7 @@ If an operation contains subsequent payload records resulting from `@stream` or `@defer` directives, the {YieldSubsequentPayloads} algorithm defines how the payloads should be processed. -YieldSubsequentPayloads(intialResponse, subsequentPayloads): +YieldSubsequentPayloads(initialResponse, subsequentPayloads): - Let {initialRecords} be any items in {subsequentPayloads} with a completed {dataExecution}. @@ -384,9 +384,9 @@ YieldSubsequentPayloads(intialResponse, subsequentPayloads): - Let {payload} be the completed result returned by {dataExecution}. - Append {payload} to {initialIncremental}. - If {initialIncremental} is not empty: - - Add an entry to {intialResponse} named `incremental` containing the value + - Add an entry to {initialResponse} named `incremental` containing the value {incremental}. -- Yield {intialResponse}. +- Yield {initialResponse}. 
- While {subsequentPayloads} is not empty: - If a termination signal is received: - For each {record} in {subsequentPayloads}: @@ -555,7 +555,7 @@ A correct executor must generate the following result for that selection set: ``` When subsections contain a `@stream` or `@defer` directive, these subsections -are no longer required to execute serially. Exeuction of the deferred or +are no longer required to execute serially. Execution of the deferred or streamed sections of the subsection may be executed in parallel, as defined in {ExecuteStreamField} and {ExecuteDeferredFragment}. From 1ad7e9cefbab501c74ac4cc53764e084eb432928 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Tue, 23 Aug 2022 12:25:56 -0400 Subject: [PATCH 33/65] if: Boolean! = true --- spec/Section 3 -- Type System.md | 21 ++++++++++++--------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index c543d3b95..4770e9ccb 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2128,7 +2128,7 @@ scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122") ```graphql directive @defer( label: String - if: Boolean + if: Boolean! = true ) on FRAGMENT_SPREAD | INLINE_FRAGMENT ``` @@ -2157,9 +2157,9 @@ fragment someFragment on User { #### @defer Arguments -- `if: Boolean` - When `true`, fragment _should_ be deferred. When `false`, - fragment will not be deferred and data will be included in the initial - response. If omitted, defaults to `true`. +- `if: Boolean! = true` - When `true`, fragment _should_ be deferred. When + `false`, fragment will not be deferred and data will be included in the + initial response. If omitted, defaults to `true`. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding defer directive. If provided, the GraphQL Server must add it to the corresponding payload. 
`label` @@ -2169,7 +2169,11 @@ fragment someFragment on User { ### @stream ```graphql -directive @stream(label: String, initialCount: Int = 0, if: Boolean) on FIELD +directive @stream( + label: String + if: Boolean! = true + initialCount: Int = 0 +) on FIELD ``` The `@stream` directive may be provided for a field of `List` type so that the @@ -2189,15 +2193,14 @@ query myQuery($shouldStream: Boolean) { #### @stream Arguments -- `if: Boolean` - When `true`, field _should_ be streamed. When `false`, the - field will not be streamed and all list items will be included in the initial - response. If omitted, defaults to `true`. +- `if: Boolean! = true` - When `true`, field _should_ be streamed. When `false`, + the field will not be streamed and all list items will be included in the + initial response. If omitted, defaults to `true`. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding stream directive. If provided, the GraphQL Server must add it to the corresponding payload. `label` must be unique label across all `@defer` and `@stream` directives in a document. `label` must not be provided as a variable. - - `initialCount: Int` - The number of list items the server should return as part of the initial response. If omitted, defaults to `0`. A field error will be raised if the value of this argument is less than `0`. From 4b6554ecc18e43d15462979717ef91001c45e6e3 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Tue, 23 Aug 2022 19:59:38 -0400 Subject: [PATCH 34/65] address pr feedback --- spec/Section 6 -- Execution.md | 32 ++++++++++++++++++-------------- 1 file changed, 18 insertions(+), 14 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index b5c8db97b..5c09a1add 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -388,26 +388,30 @@ YieldSubsequentPayloads(initialResponse, subsequentPayloads): {incremental}. 
- Yield {initialResponse}. - While {subsequentPayloads} is not empty: -- If a termination signal is received: - - For each {record} in {subsequentPayloads}: - - If {record} contains {iterator}: - - Send a termination signal to {iterator}. - - Return. -- Wait for at least one record in {subsequentPayloads} to have a completed - {dataExecution}. -- Let {subsequentResponse} be an unordered map with an entry {incremental} - initialized to an empty list. -- Let {records} be the items in {subsequentPayloads} with a completed - {dataExecution}. + - If a termination signal is received: + - For each {record} in {subsequentPayloads}: + - If {record} contains {iterator}: + - Send a termination signal to {iterator}. + - Return. + - Wait for at least one record in {subsequentPayloads} to have a completed + {dataExecution}. + - Let {subsequentResponse} be an unordered map with an entry {incremental} + initialized to an empty list. + - Let {records} be the items in {subsequentPayloads} with a completed + {dataExecution}. - For each {record} in {records}: - Remove {record} from {subsequentPayloads}. - If {isCompletedIterator} on {record} is {true}: - Continue to the next record in {records}. - Let {payload} be the completed result returned by {dataExecution}. - - Append {payload} to {incremental}. + - Append {payload} to the {incremental} entry on {subsequentResponse}. - If {subsequentPayloads} is empty: - - Add an entry to {response} named `hasNext` with the value {false}. - - Yield {response} + - Add an entry to {subsequentResponse} named `hasNext` with the value + {false}. + - Otherwise, if {subsequentPayloads} is not empty: + - Add an entry to {subsequentResponse} named `hasNext` with the value + {true}. 
+ - Yield {subsequentResponse} ## Executing Selection Sets From 9103fdb8b5473abe63d086dafb7181cdb25bf2fe Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Tue, 23 Aug 2022 20:02:18 -0400 Subject: [PATCH 35/65] clarify null behavior of if --- spec/Section 3 -- Type System.md | 4 ++-- spec/Section 6 -- Execution.md | 37 ++++++++++++++++---------------- 2 files changed, 21 insertions(+), 20 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 4770e9ccb..3cc36ee3a 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2159,7 +2159,7 @@ fragment someFragment on User { - `if: Boolean! = true` - When `true`, fragment _should_ be deferred. When `false`, fragment will not be deferred and data will be included in the - initial response. If omitted, defaults to `true`. + initial response. Defaults to `true` when omitted or null. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding defer directive. If provided, the GraphQL Server must add it to the corresponding payload. `label` @@ -2195,7 +2195,7 @@ query myQuery($shouldStream: Boolean) { - `if: Boolean! = true` - When `true`, field _should_ be streamed. When `false`, the field will not be streamed and all list items will be included in the - initial response. If omitted, defaults to `true`. + initial response. Defaults to `true` when omitted or null. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding stream directive. If provided, the GraphQL Server must add it to the corresponding payload. 
`label` diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5c09a1add..dbac8bb4c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -627,8 +627,8 @@ asyncRecord, visitedFragments, deferredGroupedFieldsList): - If {selection} is a {FragmentSpread}: - Let {fragmentSpreadName} be the name of {selection}. - If {fragmentSpreadName} provides the directive `@defer` and its {if} - argument is {true} or is a variable in {variableValues} with the value - {true}: + argument is not {false} and is not a variable in {variableValues} with the + value {false}: - Let {deferDirective} be that directive. - If {deferDirective} is not defined: - If {fragmentSpreadName} is in {visitedFragments}, continue with the next @@ -668,19 +668,20 @@ asyncRecord, visitedFragments, deferredGroupedFieldsList): fragmentType)} is false, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - If {InlineFragment} provides the directive `@defer`, let {deferDirective} - be that directive. - - If {deferDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}: - - Let {label} be the value or the variable to {deferDirective}'s {label} - argument. - - Let {deferredGroupedFields} be the result of calling - {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, asyncRecord, visitedFragments, - deferredGroupedFieldsList)}. - - Append a record containing {label} and {deferredGroupedFields} to - {deferredGroupedFieldsList}. - - Continue with the next {selection} in {selectionSet}. + - If {InlineFragment} provides the directive `@defer` and its {if} argument + is not {false} and is not a variable in {variableValues} with the value + {false}: + - Let {deferDirective} be that directive. + - If {deferDirective} is defined: + - Let {label} be the value or the variable to {deferDirective}'s {label} + argument. 
+ - Let {deferredGroupedFields} be the result of calling + {CollectFields(objectType, objectValue, fragmentSelectionSet, + variableValues, path, asyncRecord, visitedFragments, + deferredGroupedFieldsList)}. + - Append a record containing {label} and {deferredGroupedFields} to + {deferredGroupedFieldsList}. + - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, objectValue, fragmentSelectionSet, variableValues, path, asyncRecord, visitedFragments, @@ -945,9 +946,9 @@ subsequentPayloads, asyncRecord): - If {result} is an iterator: - Let {field} be the first entry in {fields}. - Let {innerType} be the inner type of {fieldType}. - - If {field} provides the directive `@stream` and its {if} argument is - {true} or is a variable in {variableValues} with the value {true} and - {innerType} is the outermost return type of the list type defined for + - If {field} provides the directive `@stream` and its {if} argument is not + {false} and is not a variable in {variableValues} with the value {false} + and {innerType} is the outermost return type of the list type defined for {field}: - Let {streamDirective} be that directive. 
- Let {initialCount} be the value or variable provided to From 3944d05cdcfbb4963200a6fdcd39b6bc22b8d800 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:03:48 -0400 Subject: [PATCH 36/65] Add error boundary behavior --- spec/Section 6 -- Execution.md | 36 +++++++++++++++++++++++++++++----- 1 file changed, 31 insertions(+), 5 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index dbac8bb4c..491f65ce0 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -456,8 +456,11 @@ If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a _field error_ then that error must propagate to this entire selection set, either resolving to {null} if allowed or further propagated to a parent field. -If this occurs, any sibling fields which have not yet executed or have not yet -yielded a value may be cancelled to avoid unnecessary work. +If this occurs, any defer or stream executions with a path that starts with the +same path as the resolved {null} must not return their results to the client. +These defer or stream executions or any sibling fields which have not yet +executed or have not yet yielded a value may be cancelled to avoid unnecessary +work. Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about this behavior. @@ -748,7 +751,11 @@ variableValues, parentRecord, subsequentPayloads): - Wait for the result of {dataExecution} on {parentRecord}. - If {errors} is not empty: - Add an entry to {payload} named `errors` with the value {errors}. - - Add an entry to {payload} named `data` with the value {resultMap}. + - If a field error was raised, causing a {null} to be propagated to + {responseValue}, and {objectType} is a Non-Nullable type: + - Add an entry to {payload} named `data` with the value {null}. + - Otherwise: + - Add an entry to {payload} named `data` with the value {resultMap}. - Add an entry to {payload} named `label` with the value {label}. 
- Add an entry to {payload} named `path` with the value {path}.
  - Return {payload}.

@@ -923,8 +930,12 @@ variableValues, subsequentPayloads):
   - Wait for the result of {dataExecution} on {parentRecord}.
   - If {errors} is not empty:
     - Add an entry to {payload} named `errors` with the value {errors}.
-  - Add an entry to {payload} named `items` with a list containing the value
-    {data}.
+  - If a field error was raised, causing a {null} to be propagated to {data},
+    and {innerType} is a Non-Nullable type:
+    - Add an entry to {payload} named `items` with the value {null}.
+  - Otherwise:
+    - Add an entry to {payload} named `items` with a list containing the value
+      {data}.
   - Add an entry to {payload} named `label` with the value {label}.
   - Add an entry to {payload} named `path` with the value {itemPath}.
   - Return {payload}.
@@ -1100,6 +1111,21 @@ resolves to {null}, then the entire list must resolve to {null}. If the `List`
 type is also wrapped in a `Non-Null`, the field error continues to propagate
 upwards.
 
+When a field error is raised inside `ExecuteDeferredFragment` or
+`ExecuteStreamField`, the defer and stream payloads act as error boundaries.
+That is, the null resulting from a `Non-Null` type cannot propagate outside of
+the boundary of the defer or stream payload.
+
+If a fragment with the `defer` directive is spread on a Non-Nullable object
+type, and a field error has caused a {null} to propagate to the associated
+object, the {null} should not propagate any further, and the associated Defer
+Payload's `data` field must be set to {null}.
+
+If the `stream` directive is present on a list field with a Non-Nullable inner
+type, and a field error has caused a {null} to propagate to the list item, the
+{null} should not propagate any further, and the associated Stream Payload's
+`items` field must be set to {null}.
+ If all fields from the root of the request to the source of the field error return `Non-Null` types, then the {"data"} entry in the response should be {null}. From 90b31ae15a372c87b78b3cd2c4a76897b6ad83e0 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:22:29 -0400 Subject: [PATCH 37/65] defer/stream response => payload --- spec/Section 7 -- Response.md | 54 +++++++++++++++++------------------ 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 32dedbe14..b8db8364e 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -24,13 +24,13 @@ validation error, this entry must not be present. When the response of the GraphQL operation is an event stream, the first value will be the initial response. All subsequent values may contain an `incremental` -entry, containing a list of Defer or Stream responses. +entry, containing a list of Defer or Stream payloads. -The `label` and `path` entries on Defer and Stream responses are used by clients +The `label` and `path` entries on Defer and Stream payloads are used by clients to identify the `@defer` or `@stream` directive from the GraphQL operation that triggered this response to be included in an `incremental` entry on a value returned by the event stream. When a label is provided, the combination of these -two entries will be unique across all Defer and Stream responses returned in the +two entries will be unique across all Defer and Stream payloads returned in the event stream. If the response of the GraphQL operation is an event stream, each response map @@ -49,7 +49,7 @@ set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional restrictions on its contents. 
When the response of the GraphQL operation is an event stream, implementors may send subsequent payloads containing only -`hasNext` and `extensions` entries. Defer and Stream responses may also contain +`hasNext` and `extensions` entries. Defer and Stream payloads may also contain an entry with the key `extensions`, also reserved for implementors to extend the protocol however they see fit. @@ -267,38 +267,38 @@ discouraged. ### Incremental The `incremental` entry in the response is a non-empty list of Defer or Stream -responses. If the response of the GraphQL operation is an event stream, this +payloads. If the response of the GraphQL operation is an event stream, this field may appear on both the initial and subsequent values. -#### Stream response +#### Stream payload -A stream response is a map that may appear as an item in the `incremental` entry -of a response. A stream response is the result of an associated `@stream` -directive in the operation. A stream response must contain `items` and `path` +A stream payload is a map that may appear as an item in the `incremental` entry +of a response. A stream payload is the result of an associated `@stream` +directive in the operation. A stream payload must contain `items` and `path` entries and may contain `label`, `errors`, and `extensions` entries. ##### Items -The `items` entry in a stream response is a list of results from the execution -of the associated @stream directive. This output will be a list of the same type -of the field with the associated `@stream` directive. If `items` is set to -`null`, it indicates that an error has caused a `null` to bubble up to a field -higher than the list field with the associated `@stream` directive. +The `items` entry in a stream payload is a list of results from the execution of +the associated @stream directive. This output will be a list of the same type of +the field with the associated `@stream` directive. 
If `items` is set to `null`, +it indicates that an error has caused a `null` to bubble up to a field higher +than the list field with the associated `@stream` directive. -#### Defer response +#### Defer payload -A defer response is a map that may appear as an item in the `incremental` entry -of a response. A defer response is the result of an associated `@defer` -directive in the operation. A defer response must contain `data` and `path` -entries and may contain `label`, `errors`, and `extensions` entries. +A defer payload is a map that may appear as an item in the `incremental` entry +of a response. A defer payload is the result of an associated `@defer` directive +in the operation. A defer payload must contain `data` and `path` entries and may +contain `label`, `errors`, and `extensions` entries. ##### Data -The `data` entry in a Defer response will be of the type of a particular field -in the GraphQL result. The adjacent `path` field will contain the path segments -of the field this data is associated with. If `data` is set to `null`, it -indicates that an error has caused a `null` to bubble up to a field higher than -the field that contains the fragment with the associated `@defer` directive. +The `data` entry in a Defer payload will be of the type of a particular field in +the GraphQL result. The adjacent `path` field will contain the path segments of +the field this data is associated with. If `data` is set to `null`, it indicates +that an error has caused a `null` to bubble up to a field higher than the field +that contains the fragment with the associated `@defer` directive. #### Path @@ -310,7 +310,7 @@ indices should be 0-indexed integers. If the path is associated to an aliased field, the path should use the aliased name, since it represents a path in the response, not in the request. 
-When the `path` field is present on a Stream response, it indicates that the +When the `path` field is present on a Stream payload, it indicates that the `items` field represents the partial result of the list field containing the corresponding `@stream` directive. All but the non-final path segments must refer to the location of the list field containing the corresponding `@stream` @@ -319,7 +319,7 @@ integer indicates that this result is set at a range, where the beginning of the range is at the index of this integer, and the length of the range is the length of the data. -When the `path` field is present on a Defer response, it indicates that the +When the `path` field is present on a Defer payload, it indicates that the `data` field represents the result of the fragment containing the corresponding `@defer` directive. The path segments must point to the location of the result of the field containing the associated `@defer` directive. @@ -329,7 +329,7 @@ field which experienced the error. #### Label -Stream and Defer responses may contain a string field `label`. This `label` is +Stream and Defer payloads may contain a string field `label`. This `label` is the same label passed to the `@defer` or `@stream` directive associated with the response. This allows clients to identify which `@defer` or `@stream` directive is associated with this value. 
`label` will not be present if the corresponding From f1c0ec2557388bb7dee4a90552e18ca0fb8a4b09 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:12:12 -0400 Subject: [PATCH 38/65] event stream => response stream --- spec/Section 7 -- Response.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index b8db8364e..bc656d353 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -10,7 +10,7 @@ the case that any _field error_ was raised on a field and was replaced with ## Response Format -A response to a GraphQL request must be a map or an event stream of maps. +A response to a GraphQL request must be a map or a response stream of maps. If the request raised any errors, the response map must contain an entry with key `errors`. The value of this entry is described in the "Errors" section. If @@ -22,33 +22,33 @@ key `data`. The value of this entry is described in the "Data" section. If the request failed before execution, due to a syntax error, missing information, or validation error, this entry must not be present. -When the response of the GraphQL operation is an event stream, the first value +When the response of the GraphQL operation is a response stream, the first value will be the initial response. All subsequent values may contain an `incremental` entry, containing a list of Defer or Stream payloads. The `label` and `path` entries on Defer and Stream payloads are used by clients to identify the `@defer` or `@stream` directive from the GraphQL operation that triggered this response to be included in an `incremental` entry on a value -returned by the event stream. When a label is provided, the combination of these -two entries will be unique across all Defer and Stream payloads returned in the -event stream. +returned by the response stream. 
When a label is provided, the combination of +these two entries will be unique across all Defer and Stream payloads returned +in the response stream. -If the response of the GraphQL operation is an event stream, each response map +If the response of the GraphQL operation is a response stream, each response map must contain an entry with key `hasNext`. The value of this entry is `true` for all but the last response in the stream. The value of this entry is `false` for the last response of the stream. This entry is not required for GraphQL operations that return a single response map. -The GraphQL server may determine there are no more values in the event stream +The GraphQL server may determine there are no more values in the response stream after a previous value with `hasNext` equal to `true` has been emitted. In this -case the last value in the event stream should be a map without `data`, `label`, -and `path` entries, and a `hasNext` entry with a value of `false`. +case the last value in the response stream should be a map without `data`, +`label`, and `path` entries, and a `hasNext` entry with a value of `false`. The response map may also contain an entry with key `extensions`. This entry, if set, must have a map as its value. This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional -restrictions on its contents. When the response of the GraphQL operation is an -event stream, implementors may send subsequent payloads containing only +restrictions on its contents. When the response of the GraphQL operation is a +response stream, implementors may send subsequent response maps containing only `hasNext` and `extensions` entries. Defer and Stream payloads may also contain an entry with the key `extensions`, also reserved for implementors to extend the protocol however they see fit. @@ -267,7 +267,7 @@ discouraged. 
### Incremental The `incremental` entry in the response is a non-empty list of Defer or Stream -payloads. If the response of the GraphQL operation is an event stream, this +payloads. If the response of the GraphQL operation is a response stream, this field may appear on both the initial and subsequent values. #### Stream payload From 3830406c2ad2433c4af987718050794ff68a4ee4 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:14:27 -0400 Subject: [PATCH 39/65] link to path section --- spec/Section 7 -- Response.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index bc656d353..0be0b5e03 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -134,7 +134,7 @@ If an error can be associated to a particular field in the GraphQL result, it must contain an entry with the key `path` that details the path of the response field which experienced the error. This allows clients to identify whether a `null` result is intentional or caused by a runtime error. The value of this -field is described in the "Path" section. +field is described in the [Path](#sec-Path) section. For example, if fetching one of the friends' names fails in the following operation: From f950efbf75dd420e59aca2b9f38cbf73fc18e54f Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:15:16 -0400 Subject: [PATCH 40/65] use case no dash --- spec/Section 3 -- Type System.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 3cc36ee3a..ee4cdb758 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2209,8 +2209,8 @@ Note: The ability to defer and/or stream parts of a response can have a potentially significant impact on application performance. Developers generally need clear, predictable control over their application's performance. 
It is highly recommended that GraphQL servers honor the `@defer` and `@stream` -directives on each execution. However, the specification allows advanced -use-cases where the server can determine that it is more performant to not defer +directives on each execution. However, the specification allows advanced use +cases where the server can determine that it is more performant to not defer and/or stream. Therefore, GraphQL clients _must_ be able to process a response that ignores the `@defer` and/or `@stream` directives. This also applies to the `initialCount` argument on the `@stream` directive. Clients _must_ be able to From ad5b2e23331797cb53a333a6c93ebb650f3ea081 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:15:56 -0400 Subject: [PATCH 41/65] remove "or null" --- spec/Section 3 -- Type System.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index ee4cdb758..087a740bc 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2159,7 +2159,7 @@ fragment someFragment on User { - `if: Boolean! = true` - When `true`, fragment _should_ be deferred. When `false`, fragment will not be deferred and data will be included in the - initial response. Defaults to `true` when omitted or null. + initial response. Defaults to `true` when omitted. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding defer directive. If provided, the GraphQL Server must add it to the corresponding payload. `label` @@ -2195,7 +2195,7 @@ query myQuery($shouldStream: Boolean) { - `if: Boolean! = true` - When `true`, field _should_ be streamed. When `false`, the field will not be streamed and all list items will be included in the - initial response. Defaults to `true` when omitted or null. + initial response. Defaults to `true` when omitted. 
- `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding stream directive. If provided, the GraphQL Server must add it to the corresponding payload. `label` From c1f3f65a6101f5565bc46287f0a471d6ccdb39d2 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 11:31:04 -0400 Subject: [PATCH 42/65] add detailed incremental example --- spec/Section 7 -- Response.md | 80 +++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 0be0b5e03..477c964ca 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -270,6 +270,86 @@ The `incremental` entry in the response is a non-empty list of Defer or Stream payloads. If the response of the GraphQL operation is a response stream, this field may appear on both the initial and subsequent values. +For example, a query containing both defer and stream: + +```graphql example +query { + person(id: "cGVvcGxlOjE=") { + ...HomeWorldFragment @defer(label: "homeWorldDefer") + name + films @stream(initialCount: 1, label: "filmsStream") { + title + } + } +} +fragment HomeWorldFragment on Person { + homeWorld { + name + } +} +``` + +The response stream might look like: + +Response 1, the initial response does not contain any deferred or streamed +results. + +```json example +{ + "data": { + "person": { + "name": "Luke Skywalker", + "films": [{ "title": "A New Hope" }] + } + }, + "hasNext": true +} +``` + +Response 2, contains the defer payload and the first stream payload. + +```json example +{ + "incremental": [ + { + "label": "homeWorldDefer", + "path": ["person"], + "data": { "homeWorld": { "name": "Tatooine" } } + }, + { + "label": "filmsStream", + "path": ["person", "films", 1], + "items": [{ "title": "The Empire Strikes Back" }] + } + ], + "hasNext": true +} +``` + +Response 3, contains an additional stream payload. 
+ +```json example +{ + "incremental": [ + { + "label": "filmsStream", + "path": ["person", "films", 2], + "items": [{ "title": "Return of the Jedi" }] + } + ], + "hasNext": true +} +``` + +Response 4, contains no incremental payloads, {hasNext} set to {false} indicates +the end of the stream. + +```json example +{ + "hasNext": false +} +``` + #### Stream payload A stream payload is a map that may appear as an item in the `incremental` entry From 2e417490c591100d1097a0217ca6dc706130bac8 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 12:22:32 -0400 Subject: [PATCH 43/65] update label validation rule --- spec/Section 5 -- Validation.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 018d7b91a..78a910cbf 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -1560,17 +1560,17 @@ mutation { ** Formal Specification ** -- For every {directive} in a document. -- Let {directiveName} be the name of {directive}. -- If {directiveName} is "defer" or "stream": - - For every {argument} in {directive}: - - Let {argumentName} be the name of {argument}. - - Let {argumentValue} be the value passed to {argument}. - - If {argumentName} is "label": - - {argumentValue} must not be a variable. - - Let {labels} be all label values passed to defer or stream directive label - arguments. - - {labels} must be a set of one. +- Let {labelValues} be an empty set. +- For every {directive} in the document: + - Let {directiveName} be the name of {directive}. + - If {directiveName} is "defer" or "stream": + - For every {argument} in {directive}: + - Let {argumentName} be the name of {argument}. + - Let {argumentValue} be the value passed to {argument}. + - If {argumentName} is "label": + - {argumentValue} must not be a variable. + - {argumentValue} must not be present in {labelValues}. + - Append {argumentValue} to {labelValues}. 
**Explanatory Text**

From abb14a0d078f1ccbb42a67e6df6779f01ff3874d Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Thu, 8 Sep 2022 17:27:57 -0400
Subject: [PATCH 44/65] clarify hasNext on incremental example

---
 spec/Section 7 -- Response.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md
index 477c964ca..5cf6fc567 100644
--- a/spec/Section 7 -- Response.md
+++ b/spec/Section 7 -- Response.md
@@ -41,8 +41,8 @@ operations that return a single response map.

The GraphQL server may determine there are no more values in the response stream
after a previous value with `hasNext` equal to `true` has been emitted. In this
-case the last value in the response stream should be a map without `data`,
-`label`, and `path` entries, and a `hasNext` entry with a value of `false`.
+case the last value in the response stream should be a map without `data` and
+`incremental` entries, and a `hasNext` entry with a value of `false`.

The response map may also contain an entry with key `extensions`. This entry, if
set, must have a map as its value. This entry is reserved for implementors to
@@ -326,7 +326,10 @@ Response 2, contains the defer payload and the first stream payload.
 }
 ```

-Response 3, contains an additional stream payload.
+Response 3, contains the final stream payload. In this example, the underlying
+iterator does not close synchronously so {hasNext} is set to {true}. If this
+iterator did close synchronously, {hasNext} would be set to {false} and this
+would be the final response.

 ```json example
 {
@@ -341,8 +344,9 @@ Response 3, contains an additional stream payload.
 }
 ```

-Response 4, contains no incremental payloads, {hasNext} set to {false} indicates
-the end of the stream.
+Response 4, contains no incremental payloads. {hasNext} set to {false} indicates
+the end of the response stream. This response is sent when the underlying
+iterator of the `films` field closes.
```json example { From 4ea2a34272be153d629bc6015aba6aaa8c30ed95 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 17:45:10 -0400 Subject: [PATCH 45/65] clarify canceling of subsequent payloads --- spec/Section 6 -- Execution.md | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 491f65ce0..265401206 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -456,11 +456,16 @@ If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a _field error_ then that error must propagate to this entire selection set, either resolving to {null} if allowed or further propagated to a parent field. -If this occurs, any defer or stream executions with a path that starts with the -same path as the resolved {null} must not return their results to the client. -These defer or stream executions or any sibling fields which have not yet -executed or have not yet yielded a value may be cancelled to avoid unnecessary -work. +If this occurs, any sibling fields which have not yet executed or have not yet +yielded a value may be cancelled to avoid unnecessary work. + +Additionally, the path of each {asyncRecord} in {subsequentPayloads} must be +compared with the path of the field that ultimately resolved to {null}. If the +path of any {asyncRecord} starts with, but is not equal to, the path of the +resolved {null}, the {asyncRecord} must be removed from {subsequentPayloads} and +its result must not be sent to clients. If these async records have not yet +executed or have not yet yielded a value they may also be cancelled to avoid +unnecessary work. Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about this behavior. 
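The `path` entries in the payloads shown above are what allow a client to stitch the response stream back into a single result. As a rough illustration of that bookkeeping (not part of the spec: the response format only defines the payload shape, and `apply_incremental` is a hypothetical client-side helper), a client might merge each defer or stream payload into its accumulated `data` map like this:

```python
def apply_incremental(data, payload):
    """Merge one defer/stream payload into the accumulated `data` map.

    Hypothetical client-side helper: the spec defines the payload shape
    (`path` plus either `data` or `items`), not how clients merge it.
    """
    if "items" in payload:
        # Stream payload: the last path element is the index at which
        # the delivered items are spliced into the list.
        *steps, index = payload["path"]
        target = data
        for step in steps:
            target = target[step]
        target[index:index] = payload["items"] or []
    else:
        # Defer payload: `path` addresses the map the deferred fields
        # belong to; merge the `data` entries into it.
        target = data
        for step in payload["path"]:
            target = target[step]
        target.update(payload["data"] or {})
    return data
```

Applying the defer and stream payloads from the `films`/`homeWorld` example above onto the initial response reconstructs the same map a non-incremental execution would have produced.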
From 156549103b9ed9096613d38be0f33c24a234b0ae Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 8 Sep 2022 18:20:12 -0400 Subject: [PATCH 46/65] Add examples for non-null cases --- spec/Section 6 -- Execution.md | 149 ++++++++++++++++++++++++++++++--- 1 file changed, 139 insertions(+), 10 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 265401206..032085938 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -461,11 +461,35 @@ yielded a value may be cancelled to avoid unnecessary work. Additionally, the path of each {asyncRecord} in {subsequentPayloads} must be compared with the path of the field that ultimately resolved to {null}. If the -path of any {asyncRecord} starts with, but is not equal to, the path of the -resolved {null}, the {asyncRecord} must be removed from {subsequentPayloads} and -its result must not be sent to clients. If these async records have not yet -executed or have not yet yielded a value they may also be cancelled to avoid -unnecessary work. +path of any {asyncRecord} starts with the path of the resolved {null}, the +{asyncRecord} must be removed from {subsequentPayloads} and its result must not +be sent to clients. If these async records have not yet executed or have not yet +yielded a value they may also be cancelled to avoid unnecessary work. + +For example, assume the field `alwaysThrows` is a `Non-Null` type that always +raises a field error: + +```graphql example +{ + myObject { + ... @defer { + name + } + alwaysThrows + } +} +``` + +In this case, only one response should be sent. The async payload record +associated with the `@defer` directive should be removed and it's execution may +be cancelled. + +```json example +{ + "data": { "myObject": null }, + "hasNext": false +} +``` Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about this behavior. 
@@ -757,7 +781,7 @@ variableValues, parentRecord, subsequentPayloads): - If {errors} is not empty: - Add an entry to {payload} named `errors` with the value {errors}. - If a field error was raised, causing a {null} to be propagated to - {responseValue}, and {objectType} is a Non-Nullable type: + {responseValue}: - Add an entry to {payload} named `data` with the value {null}. - Otherwise: - Add an entry to {payload} named `data` with the value {resultMap}. @@ -1121,16 +1145,121 @@ When a field error is raised inside `ExecuteDeferredFragment` or That is, the null resulting from a `Non-Null` type cannot propagate outside of the boundary of the defer or stream payload. -If a fragment with the `defer` directive is spread on a Non-Nullable object -type, and a field error has caused a {null} to propagate to the associated -object, the {null} should not propagate any further, and the associated Defer -Payload's `data` field must be set to {null}. +If a field error is raised while executing the selection set of a fragment with +the `defer` directive, causing a {null} to propagate to the object containing +this fragment, the {null} should not propagate any further. In this case, the +associated Defer Payload's `data` field must be set to {null}. + +For example, assume the `month` field is a `Non-Null` type that raises a field +error: + +```graphql example +{ + birthday { + ... @defer { + month + } + } +} +``` + +Response 1, the initial response is sent: + +```json example +{ + "data": { "birthday": {} }, + "hasNext": true +} +``` + +Response 2, the defer payload is sent. The {data} entry has been set to {null}, +as this {null} as propagated as high as the error boundary will allow. 
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["birthday"],
+      "data": null
+    }
+  ],
+  "hasNext": false
+}
+```

If the `stream` directive is present on a list field with a Non-Nullable inner
type, and a field error has caused a {null} to propagate to the list item, the
{null} should not propagate any further, and the associated Stream Payload's
`item` field must be set to {null}.

+For example, assume the `films` field is a `List` type with a `Non-Null` inner
+type. In this case, the second list item raises a field error:
+
+```graphql example
+{
+  films @stream(initialCount: 1)
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "films": ["A New Hope"] },
+  "hasNext": true
+}
+```
+
+Response 2, the first stream payload is sent. The {items} entry has been set to
+{null}, as this {null} has propagated as high as the error boundary will allow.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["films", 1],
+      "items": null
+    }
+  ],
+  "hasNext": false
+}
+```
+
+In this alternative example, assume the `films` field is a `List` type without a
+`Non-Null` inner type. In this case, the second list item also raises a field
+error:
+
+```graphql example
+{
+  films @stream(initialCount: 1)
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "films": ["A New Hope"] },
+  "hasNext": true
+}
+```
+
+Response 2, the first stream payload is sent. The {items} entry has been set to
+a list containing {null}, as this {null} has only propagated as high as the list
+item.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["films", 1],
+      "items": [null]
+    }
+  ],
+  "hasNext": false
+}
+```
+
If all fields from the root of the request to the source of the field error
return `Non-Null` types, then the {"data"} entry in the response should be
{null}.
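The removal rule described in the examples above — drop any pending async payload whose result would land inside a location that has resolved to {null} through error propagation — amounts to a path-prefix test. A minimal sketch, assuming a hypothetical record shape in which each pending payload record is a map carrying a `path` list (the spec text defines the behavior, not this representation):

```python
def filter_payloads(subsequent_payloads, null_path):
    """Keep only pending payload records unaffected by the propagated null.

    A record is dropped when its path starts with `null_path`, i.e. its
    result would be delivered inside a location that resolved to null.
    """
    n = len(null_path)
    return [
        record
        for record in subsequent_payloads
        if record["path"][:n] != list(null_path)
    ]
```

Records that are dropped this way may also have any in-flight work cancelled, since their results will never be delivered.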
From a938f44fc4616f54755981d1050105e18f92f4bc Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Fri, 9 Sep 2022 06:58:54 -0400
Subject: [PATCH 47/65] typo

---
 spec/Section 6 -- Execution.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 032085938..d1299a19f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -481,7 +481,7 @@ raises a field error:
 ```

In this case, only one response should be sent. The async payload record
-associated with the `@defer` directive should be removed and it's execution may
+associated with the `@defer` directive should be removed and its execution may
be cancelled.

```json example

From a301f21e3c616229d06d93a380ad661e5ee4bfdb Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Fri, 9 Sep 2022 06:59:06 -0400
Subject: [PATCH 48/65] improve non-null example

---
 spec/Section 6 -- Execution.md | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index d1299a19f..2ebe07abb 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -1156,9 +1156,12 @@ error:
 ```graphql example
 {
   birthday {
-    ... @defer {
+    ... @defer(label: "monthDefer") {
       month
     }
+    ... @defer(label: "yearDefer") {
+      year
+    }
   }
 }
 ```
@@ -1172,14 +1175,16 @@ Response 1, the initial response is sent:
 }
 ```

-Response 2, the defer payload is sent. The {data} entry has been set to {null},
-as this {null} as propagated as high as the error boundary will allow.
+Response 2, the defer payload for label "monthDefer" is sent. The {data} entry
+has been set to {null}, as this {null} has propagated as high as the error
+boundary will allow.

 ```json example
 {
   "incremental": [
     {
       "path": ["birthday"],
+      "label": "monthDefer",
       "data": null
     }
   ],
   "hasNext": false
} ``` +Response 3, the defer payload for label "yearDefer" is sent. The data in this +payload is unaffected by the previous null error. + +```json example +{ + "incremental": [ + { + "path": ["birthday"], + "label": "yearDefer", + "data": { "year": "2022" } + } + ], + "hasNext": false +} +``` + If the `stream` directive is present on a list field with a Non-Nullable inner type, and a field error has caused a {null} to propagate to the list item, the {null} should not propagate any further, and the associated Stream Payload's From 38bfbb9bb204073c992631956248c4cbc04eb9e6 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Fri, 9 Sep 2022 07:26:51 -0400 Subject: [PATCH 49/65] Add FilterSubsequentPayloads algorithm --- spec/Section 6 -- Execution.md | 48 +++++++++++++++++++++++++++------- 1 file changed, 39 insertions(+), 9 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 2ebe07abb..f3c2b055e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -459,12 +459,45 @@ either resolving to {null} if allowed or further propagated to a parent field. If this occurs, any sibling fields which have not yet executed or have not yet yielded a value may be cancelled to avoid unnecessary work. -Additionally, the path of each {asyncRecord} in {subsequentPayloads} must be -compared with the path of the field that ultimately resolved to {null}. If the -path of any {asyncRecord} starts with the path of the resolved {null}, the -{asyncRecord} must be removed from {subsequentPayloads} and its result must not -be sent to clients. If these async records have not yet executed or have not yet -yielded a value they may also be cancelled to avoid unnecessary work. +Additionally, async payload records in {subsequentPayloads} must be filtered if +their path points to a location that has resolved to {null} due to propagation +of a field error. 
This is described in
+[Filter Subsequent Payloads](#sec-Filter-Subsequent-Payloads). These async
+payload records must be removed from {subsequentPayloads} and their result must
+not be sent to clients. If these async records have not yet executed or have not
+yet yielded a value they may also be cancelled to avoid unnecessary work.
+
+Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about
+this behavior.
+
+### Filter Subsequent Payloads
+
+When a field error is raised, there may be async payload records in
+{subsequentPayloads} with a path that points to a location that has been removed
+or set to null due to null propagation. These async payload records must be
+removed from subsequent payloads and their results must not be sent to clients.
+
+In {FilterSubsequentPayloads}, {nullPath} is the path which has resolved to null
+after propagation as a result of a field error. {currentAsyncRecord} is the
+async payload record where the field error was raised. {currentAsyncRecord} will
+not be set for field errors that were raised during the initial execution
+outside of {ExecuteDeferredFragment} or {ExecuteStreamField}.
+
+FilterSubsequentPayloads(subsequentPayloads, nullPath, currentAsyncRecord):
+
+- For each {asyncRecord} in {subsequentPayloads}:
+  - If {asyncRecord} is the same record as {currentAsyncRecord}:
+    - Continue to the next record in {subsequentPayloads}.
+  - Initialize {index} to zero.
+  - While {index} is less than the length of {nullPath}:
+    - Initialize {nullPathItem} to the element at {index} in {nullPath}.
+    - Initialize {asyncRecordPathItem} to the element at {index} in the {path}
+      of {asyncRecord}.
+    - If {nullPathItem} is not equivalent to {asyncRecordPathItem}:
+      - Continue to the next record in {subsequentPayloads}.
+    - Increment {index} by one.
+  - Remove {asyncRecord} from {subsequentPayloads}. Optionally, cancel any
+    incomplete work in the execution of {asyncRecord}.
For example, assume the field `alwaysThrows` is a `Non-Null` type that always raises a field error: @@ -491,9 +524,6 @@ be cancelled. } ``` -Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about -this behavior. - ### Normal and Serial Execution Normally the executor can execute the entries in a grouped field set in whatever From 8d07deec8f0af99a31d6a7998936df424ef169c0 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 12 Oct 2022 17:22:54 -0400 Subject: [PATCH 50/65] link to note on should --- spec/Section 3 -- Type System.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 087a740bc..fc1d51e69 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -2157,9 +2157,10 @@ fragment someFragment on User { #### @defer Arguments -- `if: Boolean! = true` - When `true`, fragment _should_ be deferred. When - `false`, fragment will not be deferred and data will be included in the - initial response. Defaults to `true` when omitted. +- `if: Boolean! = true` - When `true`, fragment _should_ be deferred (See + [related note](#note-088b7)). When `false`, fragment will not be deferred and + data will be included in the initial response. Defaults to `true` when + omitted. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding defer directive. If provided, the GraphQL Server must add it to the corresponding payload. `label` @@ -2193,9 +2194,10 @@ query myQuery($shouldStream: Boolean) { #### @stream Arguments -- `if: Boolean! = true` - When `true`, field _should_ be streamed. When `false`, - the field will not be streamed and all list items will be included in the - initial response. Defaults to `true` when omitted. +- `if: Boolean! = true` - When `true`, field _should_ be streamed (See + [related note](#note-088b7)). 
When `false`, the field will not be streamed and + all list items will be included in the initial response. Defaults to `true` + when omitted. - `label: String` - May be used by GraphQL clients to identify the data from responses and associate it with the corresponding stream directive. If provided, the GraphQL Server must add it to the corresponding payload. `label` From 008818dfa93d1deb93c932ec3b6f05e9da96d2d6 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Tue, 1 Nov 2022 15:35:52 -0400 Subject: [PATCH 51/65] update on hasNext --- spec/Section 7 -- Response.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 5cf6fc567..e00f5a254 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -36,7 +36,7 @@ in the response stream. If the response of the GraphQL operation is a response stream, each response map must contain an entry with key `hasNext`. The value of this entry is `true` for all but the last response in the stream. The value of this entry is `false` for -the last response of the stream. This entry is not required for GraphQL +the last response of the stream. This entry must not be present for GraphQL operations that return a single response map. The GraphQL server may determine there are no more values in the response stream From 4adb05a0a685f26943d1df98c2d5245a0601d95b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 7 Nov 2022 16:07:31 -0500 Subject: [PATCH 52/65] small fixes (#3) * add comma * remove unused parameter --- spec/Section 6 -- Execution.md | 31 +++++++++++++------------------ 1 file changed, 13 insertions(+), 18 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index f3c2b055e..c936e1802 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -429,8 +429,7 @@ subsequentPayloads, asyncRecord): - If {path} is not provided, initialize it to an empty list. 
- If {subsequentPayloads} is not provided, initialize it to the empty set. - Let {groupedFieldSet} and {deferredGroupedFieldsList} be the result of - {CollectFields(objectType, objectValue, selectionSet, variableValues, path, - asyncRecord)}. + {CollectFields(objectType, selectionSet, variableValues, path, asyncRecord)}. - Initialize {resultMap} to an empty ordered map. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value @@ -662,8 +661,8 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, objectValue, selectionSet, variableValues, path, -asyncRecord, visitedFragments, deferredGroupedFieldsList): +CollectFields(objectType, selectionSet, variableValues, path, asyncRecord, +visitedFragments, deferredGroupedFieldsList): - If {visitedFragments} is not provided, initialize it to the empty set. - Initialize {groupedFields} to an empty ordered map of lists. @@ -708,16 +707,14 @@ asyncRecord, visitedFragments, deferredGroupedFieldsList): - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - Let {deferredGroupedFields} be the result of calling - {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, asyncRecord, visitedFragments, - deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, path, + asyncRecord, visitedFragments, deferredGroupedFieldsList)}. - Append a record containing {label} and {deferredGroupedFields} to {deferredGroupedFieldsList}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, asyncRecord, visitedFragments, - deferredGroupedFieldsList)}. 
+ {CollectFields(objectType, fragmentSelectionSet, variableValues, path, + asyncRecord, visitedFragments, deferredGroupedFieldsList)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. @@ -738,16 +735,14 @@ asyncRecord, visitedFragments, deferredGroupedFieldsList): - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - Let {deferredGroupedFields} be the result of calling - {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, asyncRecord, visitedFragments, - deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, path, + asyncRecord, visitedFragments, deferredGroupedFieldsList)}. - Append a record containing {label} and {deferredGroupedFields} to {deferredGroupedFieldsList}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, objectValue, fragmentSelectionSet, - variableValues, path, asyncRecord, visitedFragments, - deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, path, + asyncRecord, visitedFragments, deferredGroupedFieldsList)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. @@ -965,8 +960,8 @@ directive. #### Execute Stream Field -ExecuteStreamField(label, iterator, index, fields, innerType, path streamRecord, -variableValues, subsequentPayloads): +ExecuteStreamField(label, iterator, index, fields, innerType, path, +streamRecord, variableValues, subsequentPayloads): - Let {streamRecord} be an async payload record created from {label}, {path}, and {iterator}. 
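To make the shape of {ExecuteStreamField}'s output concrete, here is a deliberately simplified sketch — assumed names, eager rather than asynchronous, and with error handling and value completion of the individual items omitted — of how each remaining item of a streamed list becomes one payload addressed by the list's path plus the item's index:

```python
def stream_payloads(label, iterator, index, path):
    """Yield one stream payload per remaining list item.

    Simplified stand-in for ExecuteStreamField: real implementations
    schedule these records asynchronously and complete each item's value.
    """
    for item in iterator:
        # Each payload addresses exactly one list position: path + [index].
        yield {"label": label, "path": path + [index], "items": [item]}
        index += 1
```

Starting `index` at the stream's `initialCount` reproduces the `path` values seen in the earlier response examples (`["person", "films", 1]`, then `["person", "films", 2]`, and so on).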
From ddd0fd7c060ba2558af30adef2eb526686ad1dec Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 16 Nov 2022 17:15:20 +0200 Subject: [PATCH 53/65] remove ResolveFIeldGenerator (#4) * streamline stream execution Currently, these spec changes introduce a new internal function named `ResolveFieldGenerator` that is suggested parallels `ResolveFieldValue`. This function is used during field execution such that if the stream directive is specified, it is called instead of `ResolveFieldValue`. The reference implementation, however, does not require any such function, simply utilizing the result of `ResolveFieldValue`. With incremental delivery, collections completed by `CompleteValue` should be explicitly iterated using a well-defined iterator, such that the iterator can be passed to `ExecuteStreamField`. But this does not require a new internal function to be specified/exposed. Moreover, introducing this function causes a mixing of concerns between the `ExecuteField` and `CompleteValue` algorithms; Currently, if stream is specified for a field, `ExecuteField` extracts the iterator and passes it to `CompleteValue`, while if stream is not specified, the `ExecuteField` passes the collection, i.e. the iterable, not the iterator. In the stream case, this shunts some of the logic checking the validity of resolution results into field execution. In fact, it exposes a specification "bug" => in the stream case, no checking is actually done that `ResolveFieldGenerator` returns an iterator! This change removes `ResolveFieldGenerator` and with it some complexity, and brings it in line with the reference implementation. 
The reference implementation contains
some simplification of the algorithm for the synchronous iterator case (we
don't have to preserve the iterator on the StreamRecord, because there will be
no early close required and we don't have to set isCompletedIterator, because
we don't have to create a dummy payload for termination of the asynchronous
stream). We could consider also removing these bits as well, as they are an
implementation detail in terms of how our dispatcher is managing its
iterators, but that should be left for another change.

* run prettier
---
 spec/Section 6 -- Execution.md | 117 +++++++++++++--------------------
 1 file changed, 44 insertions(+), 73 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index c936e1802..a13859cc4 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -832,15 +832,6 @@ subsequentPayloads, asyncRecord):
 - Append {fieldName} to {path}.
 - Let {argumentValues} be the result of {CoerceArgumentValues(objectType,
   field, variableValues)}
-- If {field} provides the directive `@stream`, let {streamDirective} be that
-  directive.
-  - Let {initialCount} be the value or variable provided to {streamDirective}'s
-    {initialCount} argument.
-  - Let {resolvedValue} be {ResolveFieldGenerator(objectType, objectValue,
-    fieldName, argumentValues)}.
-  - Let {result} be the result of calling {CompleteValue(fieldType, fields,
-    resolvedValue, variableValues, path, subsequentPayloads, asyncRecord)}.
-  - Return {result}.
 - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
   argumentValues)}.
 - Let {result} be the result of calling {CompleteValue(fieldType, fields,
@@ -908,20 +899,17 @@ must only allow usage of variables of appropriate types.

While nearly all of GraphQL execution can be described generically, ultimately
the internal system exposing the GraphQL interface must provide values.
This is exposed via {ResolveFieldValue}, which produces a value for a given field on a -type for a real value. In addition, {ResolveFieldGenerator} will be exposed to -produce an iterator for a field with `List` return type. The internal system may -optionally define a generator function. In the case where the generator is not -defined, the GraphQL executor provides a default generator. For example, a -trivial generator that yields the entire list upon the first iteration. +type for a real value. -As an example, a {ResolveFieldValue} might accept the {objectType} `Person`, the -{field} {"soulMate"}, and the {objectValue} representing John Lennon. It would -be expected to yield the value representing Yoko Ono. +As an example, this might accept the {objectType} `Person`, the {field} +{"soulMate"}, and the {objectValue} representing John Lennon. It would be +expected to yield the value representing Yoko Ono. -A {ResolveFieldGenerator} might accept the {objectType} `MusicBand`, the {field} -{"members"}, and the {objectValue} representing Beatles. It would be expected to -yield a iterator of values representing John Lennon, Paul McCartney, Ringo Starr -and George Harrison. +List values are resolved similarly. For example, {ResolveFieldValue} might also +accept the {objectType} `MusicBand`, the {field} {"members"}, and the +{objectValue} representing the Beatles. It would be expected to yield a +collection of values representing John Lennon, Paul McCartney, Ringo Starr and +George Harrison. ResolveFieldValue(objectType, objectValue, fieldName, argumentValues): @@ -930,33 +918,23 @@ ResolveFieldValue(objectType, objectValue, fieldName, argumentValues): - Return the result of calling {resolver}, providing {objectValue} and {argumentValues}. 
-ResolveFieldGenerator(objectType, objectValue, fieldName, argumentValues): - -- If {objectType} provide an internal function {generatorResolver} for - generating partially resolved value of a list field named {fieldName}: - - Let {generatorResolver} be the internal function. - - Return the iterator from calling {generatorResolver}, providing - {objectValue} and {argumentValues}. -- Create {generator} from {ResolveFieldValue(objectType, objectValue, fieldName, - argumentValues)}. -- Return {generator}. - Note: It is common for {resolver} to be asynchronous due to relying on reading an underlying database or networked service to produce a value. This necessitates the rest of a GraphQL executor to handle an asynchronous execution -flow. In addition, a common implementation of {generator} is to leverage -asynchronous iterators or asynchronous generators provided by many programming -languages. +flow. In addition, an implementation for collections may leverage asynchronous +iterators or asynchronous generators provided by many programming languages. +This may be particularly helpful when used in conjunction with the `@stream` +directive. ### Value Completion After resolving the value for a field, it is completed by ensuring it adheres to the expected return type. If the return type is another Object type, then the -field execution process continues recursively. In the case where a value -returned for a list type field is an iterator due to `@stream` specified on the -field, value completion iterates over the iterator until the number of items -yield by the iterator satisfies `initialCount` specified on the `@stream` -directive. +field execution process continues recursively. If the return type is a List +type, each member of the resolved collection is completed using the same value +completion process. 
In the case where `@stream` is specified on a field of list
+type, value completion iterates over the collection until the number of
+yielded items satisfies `initialCount` specified on the `@stream` directive.

 #### Execute Stream Field

@@ -1008,45 +986,38 @@ subsequentPayloads, asyncRecord):
 - If {result} is {null} (or another internal value similar to {null} such as
   {undefined}), return {null}.
 - If {fieldType} is a List type:
-  - If {result} is an iterator:
-    - Let {field} be the first entry in {fields}.
-    - Let {innerType} be the inner type of {fieldType}.
-    - If {field} provides the directive `@stream` and its {if} argument is not
-      {false} and is not a variable in {variableValues} with the value {false}
-      and {innerType} is the outermost return type of the list type defined for
-      {field}:
-      - Let {streamDirective} be that directive.
+  - If {result} is not a collection of values, raise a _field error_.
+  - Let {field} be the first entry in {fields}.
+  - Let {innerType} be the inner type of {fieldType}.
+  - If {field} provides the directive `@stream` and its {if} argument is not
+    {false} and is not a variable in {variableValues} with the value {false} and
+    {innerType} is the outermost return type of the list type defined for
+    {field}:
+    - Let {streamDirective} be that directive.
     - Let {initialCount} be the value or variable provided to
       {streamDirective}'s {initialCount} argument.
     - If {initialCount} is less than zero, raise a _field error_.
     - Let {label} be the value or variable provided to {streamDirective}'s
       {label} argument.
-    - Let {initialItems} be an empty list
-    - Let {index} be zero.
+    - Let {iterator} be an iterator for {result}.
+    - Let {items} be an empty list.
+    - Let {index} be zero.
    - While {result} is not closed:
-      - If {streamDirective} is not defined or {index} is not greater than or
-        equal to {initialCount}:
-        - Wait for the next item from {result}.
-        - Let {resultItem} be the item retrieved from {result}.
- - Let {itemPath} be {path} with {index} appended. - - Let {resolvedItem} be the result of calling {CompleteValue(innerType, - fields, resultItem, variableValues, itemPath, subsequentPayloads, - asyncRecord)}. - - Append {resolvedItem} to {initialItems}. - - Increment {index}. - - If {streamDirective} is defined and {index} is greater than or equal to - {initialCount}: - - Call {ExecuteStreamField(label, result, index, fields, innerType, - path, asyncRecord, subsequentPayloads)}. - - Let {result} be {initialItems}. - - Exit while loop. - - Return {initialItems}. - - If {result} is not a collection of values, raise a _field error_. - - Let {innerType} be the inner type of {fieldType}. - - Return a list where each list item is the result of calling - {CompleteValue(innerType, fields, resultItem, variableValues, itemPath, - subsequentPayloads, asyncRecord)}, where {resultItem} is each item in - {result} and {itemPath} is {path} with the index of the item appended. + - If {streamDirective} is defined and {index} is greater than or equal to + {initialCount}: + - Call {ExecuteStreamField(label, iterator, index, fields, innerType, + path, asyncRecord, subsequentPayloads)}. + - Return {items}. + - Otherwise: + - Retrieve the next item from {result} via the {iterator}. + - Let {resultItem} be the item retrieved from {result}. + - Let {itemPath} be {path} with {index} appended. + - Let {resolvedItem} be the result of calling {CompleteValue(innerType, + fields, resultItem, variableValues, itemPath, subsequentPayloads, + asyncRecord)}. + - Append {resolvedItem} to {initialItems}. + - Increment {index}. + - Return {items}. - If {fieldType} is a Scalar or Enum type: - Return the result of {CoerceResult(fieldType, result)}. 
- If {fieldType} is an Object, Interface, or Union type: From b54c9fe2bde0a670a2474256b5b409782220b58a Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 18 Nov 2022 16:31:34 +0200 Subject: [PATCH 54/65] fix typos (#6) * fix whitespace * complete renaming of initialItems --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index a13859cc4..bd8706ef6 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1002,7 +1002,7 @@ subsequentPayloads, asyncRecord): - Let {iterator} be an iterator for {result}. - Let {items} be an empty list. - Let {index} be zero. - - While {result} is not closed: + - While {result} is not closed: - If {streamDirective} is defined and {index} is greater than or equal to {initialCount}: - Call {ExecuteStreamField(label, iterator, index, fields, innerType, @@ -1015,9 +1015,9 @@ subsequentPayloads, asyncRecord): - Let {resolvedItem} be the result of calling {CompleteValue(innerType, fields, resultItem, variableValues, itemPath, subsequentPayloads, asyncRecord)}. - - Append {resolvedItem} to {initialItems}. + - Append {resolvedItem} to {items}. - Increment {index}. - - Return {items}. + - Return {items}. - If {fieldType} is a Scalar or Enum type: - Return the result of {CoerceResult(fieldType, result)}. 
- If {fieldType} is an Object, Interface, or Union type: From 02d46764c502add87ab8e5f0fdc602a8fd1a398b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 21 Nov 2022 21:33:58 +0200 Subject: [PATCH 55/65] Add error handling for stream iterators (#5) * Add error handling for stream iterators * also add iterator error handling within CompleteValue * incorporate feedback --- spec/Section 6 -- Execution.md | 37 +++++++++++++++++++--------------- 1 file changed, 21 insertions(+), 16 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index bd8706ef6..5723175df 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -951,25 +951,29 @@ streamRecord, variableValues, subsequentPayloads): - Set {isCompletedIterator} to {true} on {streamRecord}. - Return {null}. - Let {payload} be an unordered map. - - Let {item} be the item retrieved from {iterator}. - - Let {data} be the result of calling {CompleteValue(innerType, fields, item, - variableValues, itemPath, subsequentPayloads, parentRecord)}. - - Append any encountered field errors to {errors}. - - Increment {index}. - - Call {ExecuteStreamField(label, iterator, index, fields, innerType, path, - streamRecord, variableValues, subsequentPayloads)}. - - If {parentRecord} is defined: - - Wait for the result of {dataExecution} on {parentRecord}. - - If {errors} is not empty: - - Add an entry to {payload} named `errors` with the value {errors}. - - If a field error was raised, causing a {null} to be propagated to {data}, - and {innerType} is a Non-Nullable type: + - If an item is not retrieved because of an error: + - Append the encountered error to {errors}. - Add an entry to {payload} named `items` with the value {null}. - Otherwise: - - Add an entry to {payload} named `items` with a list containing the value - {data}. + - Let {item} be the item retrieved from {iterator}. 
+ - Let {data} be the result of calling {CompleteValue(innerType, fields, + item, variableValues, itemPath, subsequentPayloads, parentRecord)}. + - Append any encountered field errors to {errors}. + - Increment {index}. + - Call {ExecuteStreamField(label, iterator, index, fields, innerType, path, + streamRecord, variableValues, subsequentPayloads)}. + - If a field error was raised, causing a {null} to be propagated to {data}, + and {innerType} is a Non-Nullable type: + - Add an entry to {payload} named `items` with the value {null}. + - Otherwise: + - Add an entry to {payload} named `items` with a list containing the value + {data}. + - If {errors} is not empty: + - Add an entry to {payload} named `errors` with the value {errors}. - Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {itemPath}. + - If {parentRecord} is defined: + - Wait for the result of {dataExecution} on {parentRecord}. - Return {payload}. - Set {dataExecution} on {streamRecord}. - Append {streamRecord} to {subsequentPayloads}. @@ -1009,7 +1013,8 @@ subsequentPayloads, asyncRecord): path, asyncRecord, subsequentPayloads)}. - Return {items}. - Otherwise: - - Retrieve the next item from {result} via the {iterator}. + - Wait for the next item from {result} via the {iterator}. + - If an item is not retrieved because of an error, raise a _field error_. - Let {resultItem} be the item retrieved from {result}. - Let {itemPath} be {path} with {index} appended. 
- Let {resolvedItem} be the result of calling {CompleteValue(innerType, From 3e7425060a8845f59483717f6e2b2170456cafc9 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Tue, 22 Nov 2022 13:25:12 -0500 Subject: [PATCH 56/65] Raise a field error if defer/stream encountered during subscription execution --- spec/Section 6 -- Execution.md | 24 ++++++++---------------- 1 file changed, 8 insertions(+), 16 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5723175df..2191f9c2c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -329,28 +329,15 @@ MapSourceToResponseEvent(sourceStream, subscription, schema, variableValues): ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): -- Let {subsequentPayloads} be an empty list. - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level Selection Set in {subscription}. - Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - subscriptionType, initialValue, variableValues, subsequentPayloads)} - _normally_ (allowing parallelization). + subscriptionType, initialValue, variableValues)} _normally_ (allowing + parallelization). - Let {errors} be the list of all _field error_ raised while executing the selection set. -- If {subsequentPayloads} is empty: - - Return an unordered map containing {data} and {errors}. -- If {subsequentPayloads} is not empty: - - Let {initialResponse} be an unordered map containing {data}, {errors}, and - an entry named {hasNext} with the value {true}. - - Let {iterator} be the result of running - {YieldSubsequentPayloads(initialResponse, subsequentPayloads)}. - - For each {payload} yielded by {iterator}: - - If a termination signal is received: - - Send a termination signal to {iterator}. - - Return. - - Otherwise: - - Yield {payload}. +- Return an unordered map containing {data} and {errors}. 
Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to {ExecuteQuery()} since this is how each event result is produced. @@ -691,6 +678,8 @@ visitedFragments, deferredGroupedFieldsList): argument is not {false} and is not a variable in {variableValues} with the value {false}: - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise a _field + error_. - If {deferDirective} is not defined: - If {fragmentSpreadName} is in {visitedFragments}, continue with the next {selection} in {selectionSet}. @@ -731,6 +720,8 @@ visitedFragments, deferredGroupedFieldsList): is not {false} and is not a variable in {variableValues} with the value {false}: - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise a _field + error_. - If {deferDirective} is defined: - Let {label} be the value or the variable to {deferDirective}'s {label} argument. @@ -998,6 +989,7 @@ subsequentPayloads, asyncRecord): {innerType} is the outermost return type of the list type defined for {field}: - Let {streamDirective} be that directive. + - If this execution is for a subscription operation, raise a _field error_. - Let {initialCount} be the value or variable provided to {streamDirective}'s {initialCount} argument. - If {initialCount} is less than zero, raise a _field error_. 
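For illustration (this document and its field names are hypothetical, not taken from the patch; `newMessage` follows the subscription examples used elsewhere in the spec), a subscription that reaches an enabled `@stream` would now raise a field error during execution:

```graphql counter-example
subscription sub {
  newMessage {
    comments @stream(initialCount: 1) {
      body
    }
  }
}
```

The execution-time check is a backstop; the validation rule introduced in the following commit rejects such documents before execution begins.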
From cb3ab461d95a4b3e7fabd88f8a800ad6d29bd594 Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Tue, 22 Nov 2022 13:25:32 -0500
Subject: [PATCH 57/65] Add validation rule for defer/stream on subscriptions

---
 spec/Section 5 -- Validation.md | 49 +++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md
index 78a910cbf..1e8fbd6d7 100644
--- a/spec/Section 5 -- Validation.md
+++ b/spec/Section 5 -- Validation.md
@@ -1556,6 +1556,55 @@ mutation {
 }
 ```

+### Defer And Stream Directives Are Used On Valid Operations
+
+** Formal Specification **
+
+- Let {subscriptionFragments} be the empty set.
+- For each {operation} in a document:
+  - If {operation} is a subscription operation:
+    - Let {fragments} be every fragment referenced by that {operation}
+      transitively.
+    - For each {fragment} in {fragments}:
+      - Let {fragmentName} be the name of {fragment}.
+      - Add {fragmentName} to {subscriptionFragments}.
+- For every {directive} in a document:
+  - Let {directiveName} be the name of {directive}.
+  - If {directiveName} is not "defer" or "stream":
+    - Continue to the next {directive}.
+  - Let {ancestor} be the ancestor operation or fragment definition of
+    {directive}.
+  - If {ancestor} is a fragment definition:
+    - If the fragment name of {ancestor} is not present in
+      {subscriptionFragments}:
+      - Continue to the next {directive}.
+  - If {ancestor} is not a subscription operation:
+    - Continue to the next {directive}.
+  - Let {if} be the argument named "if" on {directive}.
+  - {if} must be defined.
+  - Let {argumentValue} be the value passed to {if}.
+  - {argumentValue} must be a variable, or the boolean value "false".
+
+**Explanatory Text**
+
+The defer and stream directives cannot be used to defer or stream data in
+subscription operations. If these directives appear in a subscription operation
+they must be disabled using the "if" argument.
This rule will not permit any +defer or stream directives on a subscription operation that cannot be disabled +using the "if" argument. + +For example, the following document will not pass validation because `@defer` +has been used in a subscription operation with no "if" argument defined: + +```raw graphql counter-example +subscription sub { + newMessage { + ... @defer { + body + } + } +} +``` + ### Defer And Stream Directive Labels Are Unique ** Formal Specification ** From 24cf0729ed3008d7c88c2da6e1a33a355638a95e Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 23 Nov 2022 15:48:58 -0500 Subject: [PATCH 58/65] clarify label is not required --- spec/Section 6 -- Execution.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 2191f9c2c..96d08a3b3 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -801,7 +801,8 @@ variableValues, parentRecord, subsequentPayloads): - Add an entry to {payload} named `data` with the value {null}. - Otherwise: - Add an entry to {payload} named `data` with the value {resultMap}. - - Add an entry to {payload} named `label` with the value {label}. + - If {label} is defined: + - Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {path}. - Return {payload}. - Set {dataExecution} on {deferredFragmentRecord}. @@ -961,7 +962,8 @@ streamRecord, variableValues, subsequentPayloads): {data}. - If {errors} is not empty: - Add an entry to {payload} named `errors` with the value {errors}. - - Add an entry to {payload} named `label` with the value {label}. + - If {label} is defined: + - Add an entry to {payload} named `label` with the value {label}. - Add an entry to {payload} named `path` with the value {itemPath}. - If {parentRecord} is defined: - Wait for the result of {dataExecution} on {parentRecord}. 
From d74430c9ccbd10198fbc658694fbc166aaf2d7fb Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 29 Nov 2022 15:50:09 +0200 Subject: [PATCH 59/65] fix parentRecord argument in ExecuteStreamField (#7) --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 96d08a3b3..5a889bc9b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -931,7 +931,7 @@ yielded items satisfies `initialCount` specified on the `@stream` directive. #### Execute Stream Field ExecuteStreamField(label, iterator, index, fields, innerType, path, -streamRecord, variableValues, subsequentPayloads): +parentRecord, variableValues, subsequentPayloads): - Let {streamRecord} be an async payload record created from {label}, {path}, and {iterator}. From 79da7125f370e7f5805b034dd8ae49bfa2ff8633 Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Mon, 5 Dec 2022 14:03:10 -0500 Subject: [PATCH 60/65] fix typo Co-authored-by: Simon Gellis <82392336+simongellis-attentive@users.noreply.github.com> --- spec/Section 7 -- Response.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index e00f5a254..4eade1be1 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -328,7 +328,7 @@ Response 2, contains the defer payload and the first stream payload. Response 3, contains the final stream payload. In this example, the underlying iterator does not close synchronously so {hasNext} is set to {true}. If this -iterator did close synchronously, {hasNext} would be set to {true} and this +iterator did close synchronously, {hasNext} would be set to {false} and this would be the final response. 
```json example

From 8df13da44247c6b32e26a8476092a126a9abfa9f Mon Sep 17 00:00:00 2001
From: Rob Richard
Date: Sun, 15 Jan 2023 12:18:50 -0500
Subject: [PATCH 61/65] replace server with service

---
 spec/Section 3 -- Type System.md | 14 +++++++-------
 spec/Section 7 -- Response.md    |  8 ++++----
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md
index fc1d51e69..cf0cd42b7 100644
--- a/spec/Section 3 -- Type System.md
+++ b/spec/Section 3 -- Type System.md
@@ -2163,8 +2163,8 @@ fragment someFragment on User {
   omitted.
 - `label: String` - May be used by GraphQL clients to identify the data from
   responses and associate it with the corresponding defer directive. If
-  provided, the GraphQL Server must add it to the corresponding payload. `label`
-  must be unique label across all `@defer` and `@stream` directives in a
+  provided, the GraphQL service must add it to the corresponding payload.
+  `label` must be a unique label across all `@defer` and `@stream` directives in a
   document. `label` must not be provided as a variable.

 ### @stream

@@ -2200,19 +2200,19 @@ query myQuery($shouldStream: Boolean) {
   when omitted.
 - `label: String` - May be used by GraphQL clients to identify the data from
   responses and associate it with the corresponding stream directive. If
-  provided, the GraphQL Server must add it to the corresponding payload. `label`
-  must be unique label across all `@defer` and `@stream` directives in a
+  provided, the GraphQL service must add it to the corresponding payload.
+  `label` must be a unique label across all `@defer` and `@stream` directives in a
   document. `label` must not be provided as a variable.
-- `initialCount: Int` - The number of list items the server should return as
+- `initialCount: Int` - The number of list items the service should return as
   part of the initial response. If omitted, defaults to `0`.
A field error will be raised if the value of this argument is less than `0`. Note: The ability to defer and/or stream parts of a response can have a potentially significant impact on application performance. Developers generally need clear, predictable control over their application's performance. It is -highly recommended that GraphQL servers honor the `@defer` and `@stream` +highly recommended that GraphQL services honor the `@defer` and `@stream` directives on each execution. However, the specification allows advanced use -cases where the server can determine that it is more performant to not defer +cases where the service can determine that it is more performant to not defer and/or stream. Therefore, GraphQL clients _must_ be able to process a response that ignores the `@defer` and/or `@stream` directives. This also applies to the `initialCount` argument on the `@stream` directive. Clients _must_ be able to diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 4eade1be1..db50408fa 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -39,10 +39,10 @@ all but the last response in the stream. The value of this entry is `false` for the last response of the stream. This entry must not be present for GraphQL operations that return a single response map. -The GraphQL server may determine there are no more values in the response stream -after a previous value with `hasNext` equal to `true` has been emitted. In this -case the last value in the response stream should be a map without `data` and -`incremental` entries, and a `hasNext` entry with a value of `false`. +The GraphQL service may determine there are no more values in the response +stream after a previous value with `hasNext` equal to `true` has been emitted. +In this case the last value in the response stream should be a map without +`data` and `incremental` entries, and a `hasNext` entry with a value of `false`. 
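Such a terminating map might look like this (an illustrative sketch of the shape described above, not an example taken from the patch):

```json example
{
  "hasNext": false
}
```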
The response map may also contain an entry with key `extensions`. This entry, if set, must have a map as its value. This entry is reserved for implementors to From 94363c9d5d8e53e91240ea3eabd32ff522f27a6b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 16 Jan 2023 18:28:31 +0200 Subject: [PATCH 62/65] CollectFields does not require path or asyncRecord (#11) --- spec/Section 6 -- Execution.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5a889bc9b..c84b86b5e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -416,7 +416,7 @@ subsequentPayloads, asyncRecord): - If {path} is not provided, initialize it to an empty list. - If {subsequentPayloads} is not provided, initialize it to the empty set. - Let {groupedFieldSet} and {deferredGroupedFieldsList} be the result of - {CollectFields(objectType, selectionSet, variableValues, path, asyncRecord)}. + {CollectFields(objectType, selectionSet, variableValues)}. - Initialize {resultMap} to an empty ordered map. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value @@ -648,8 +648,8 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, selectionSet, variableValues, path, asyncRecord, -visitedFragments, deferredGroupedFieldsList): +CollectFields(objectType, selectionSet, variableValues, visitedFragments, +deferredGroupedFieldsList): - If {visitedFragments} is not provided, initialize it to the empty set. - Initialize {groupedFields} to an empty ordered map of lists. @@ -696,14 +696,14 @@ visitedFragments, deferredGroupedFieldsList): - Let {label} be the value or the variable to {deferDirective}'s {label} argument. 
- Let {deferredGroupedFields} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, path, - asyncRecord, visitedFragments, deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments, deferredGroupedFieldsList)}. - Append a record containing {label} and {deferredGroupedFields} to {deferredGroupedFieldsList}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, path, - asyncRecord, visitedFragments, deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments, deferredGroupedFieldsList)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. @@ -726,21 +726,21 @@ visitedFragments, deferredGroupedFieldsList): - Let {label} be the value or the variable to {deferDirective}'s {label} argument. - Let {deferredGroupedFields} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, path, - asyncRecord, visitedFragments, deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments, deferredGroupedFieldsList)}. - Append a record containing {label} and {deferredGroupedFields} to {deferredGroupedFieldsList}. - Continue with the next {selection} in {selectionSet}. - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, path, - asyncRecord, visitedFragments, deferredGroupedFieldsList)}. + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments, deferredGroupedFieldsList)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. 
- Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - Append all items in {fragmentGroup} to {groupForResponseKey}. -- Return {groupedFields} and {deferredGroupedFieldsList}. +- Return {groupedFields}, {deferredGroupedFieldsList} and {visitedFragments}. Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` directives may be applied in either order since they apply commutatively. From fe9d8711973dcdd881311b355ea00cf072f22a53 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Sun, 21 May 2023 15:55:05 +0300 Subject: [PATCH 63/65] incremental delivery with deduplication, concurrent delivery, and early execution --- cspell.yml | 1 + spec/Section 5 -- Validation.md | 2 +- spec/Section 6 -- Execution.md | 1673 ++++++++++++++++++++++++------- spec/Section 7 -- Response.md | 61 +- 4 files changed, 1350 insertions(+), 387 deletions(-) diff --git a/cspell.yml b/cspell.yml index 8bc4a231c..4326d677a 100644 --- a/cspell.yml +++ b/cspell.yml @@ -12,6 +12,7 @@ words: - parallelization - structs - subselection + - errored # Fictional characters / examples - alderaan - hagrid diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 1e8fbd6d7..d51c39ace 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -474,7 +474,7 @@ unambiguous. Therefore any two field selections which might both be encountered for the same object are only valid if they are equivalent. During execution, the simultaneous execution of fields with the same response -name is accomplished by {MergeSelectionSets()} and {CollectFields()}. +name is accomplished by {CollectFields()}. For simple hand-written GraphQL, this rule is obviously a clear developer error, however nested fragments can make this difficult to detect manually. 
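For instance (an illustrative document, not part of this patch), the two selections of `name` below share a response key, so {CollectFields()} groups them into a single field group and the field is executed once:

```graphql example
{
  dog {
    name
    ... on Dog {
      name
    }
  }
}
```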
diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index c84b86b5e..1370bcf0b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -132,28 +132,11 @@ An initial value may be provided when executing a query operation. ExecuteQuery(query, schema, variableValues, initialValue): -- Let {subsequentPayloads} be an empty list. - Let {queryType} be the root Query type in {schema}. - Assert: {queryType} is an Object type. - Let {selectionSet} be the top level Selection Set in {query}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - queryType, initialValue, variableValues, subsequentPayloads)} _normally_ - (allowing parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- If {subsequentPayloads} is empty: - - Return an unordered map containing {data} and {errors}. -- If {subsequentPayloads} is not empty: - - Let {initialResponse} be an unordered map containing {data}, {errors}, and - an entry named {hasNext} with the value {true}. - - Let {iterator} be the result of running - {YieldSubsequentPayloads(initialResponse, subsequentPayloads)}. - - For each {payload} yielded by {iterator}: - - If a termination signal is received: - - Send a termination signal to {iterator}. - - Return. - - Otherwise: - - Yield {payload}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, queryType, + selectionSet)}. ### Mutation @@ -167,27 +150,11 @@ mutations ensures against race conditions during these side-effects. ExecuteMutation(mutation, schema, variableValues, initialValue): -- Let {subsequentPayloads} be an empty list. - Let {mutationType} be the root Mutation type in {schema}. - Assert: {mutationType} is an Object type. - Let {selectionSet} be the top level Selection Set in {mutation}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - mutationType, initialValue, variableValues, subsequentPayloads)} _serially_. 
-Let {errors} be the list of all _field error_ raised while executing the
-  selection set.
-- If {subsequentPayloads} is empty:
-  - Return an unordered map containing {data} and {errors}.
-- If {subsequentPayloads} is not empty:
-  - Let {initialResponse} be an unordered map containing {data}, {errors}, and
-    an entry named {hasNext} with the value {true}.
-  - Let {iterator} be the result of running
-    {YieldSubsequentPayloads(initialResponse, subsequentPayloads)}.
-  - For each {payload} yielded by {iterator}:
-    - If a termination signal is received:
-      - Send a termination signal to {iterator}.
-      - Return.
-    - Otherwise:
-      - Yield {payload}.
+- Return {ExecuteRootSelectionSet(variableValues, initialValue, mutationType,
+  selectionSet, true)}.

 ### Subscription

@@ -286,15 +253,17 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue):
 - Let {subscriptionType} be the root Subscription type in {schema}.
 - Assert: {subscriptionType} is an Object type.
 - Let {selectionSet} be the top level Selection Set in {subscription}.
-- Let {groupedFieldSet} be the result of {CollectFields(subscriptionType,
-  selectionSet, variableValues)}.
+- Let {fieldsByTarget} be the result of calling
+  {AnalyzeSelectionSet(subscriptionType, selectionSet, variableValues)}.
+- Let {groupedFieldSet} be the first entry in {fieldsByTarget}.
 - If {groupedFieldSet} does not have exactly one entry, raise a _request
   error_.
-- Let {fields} be the value of the first entry in {groupedFieldSet}.
-- Let {fieldName} be the name of the first entry in {fields}. Note: This value
-  is unaffected if an alias is used.
-- Let {field} be the first entry in {fields}.
+- Let {fieldGroup} be the value of the first entry in {groupedFieldSet}.
+- Let {fieldDetails} be the first entry in {fieldGroup}.
+- Let {node} be the corresponding entry on {fieldDetails}.
+- Let {fieldName} be the name of {node}. Note: This value is unaffected if an
+  alias is used.
- Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType, - field, variableValues)} + node, variableValues)} - Let {fieldStream} be the result of running {ResolveFieldEventStream(subscriptionType, initialValue, fieldName, argumentValues)}. @@ -321,10 +290,9 @@ MapSourceToResponseEvent(sourceStream, subscription, schema, variableValues): - Return a new event stream {responseStream} which yields events as follows: - For each {event} on {sourceStream}: - - Let {executionResult} be the result of running + - Let {response} be the result of running {ExecuteSubscriptionEvent(subscription, schema, variableValues, event)}. - - For each {response} yielded by {executionResult}: - - Yield an event containing {response}. + - Yield an event containing {response}. - When {responseStream} completes: complete this event stream. ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): @@ -332,15 +300,19 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level Selection Set in {subscription}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, +- Let {fieldsByTarget} be the result of calling + {AnalyzeSelectionSet(subscriptionType, selectionSet, variableValues)}. +- Let {groupedFieldSet} be the first entry in {fieldsByTarget}. +- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, subscriptionType, initialValue, variableValues)} _normally_ (allowing parallelization). - Let {errors} be the list of all _field error_ raised while executing the - selection set. + {groupedFieldSet}. - Return an unordered map containing {data} and {errors}. Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to -{ExecuteQuery()} since this is how each event result is produced. 
+{ExecuteQuery()} since this is how each event result is produced. Incremental +delivery, however, is not supported within ExecuteSubscriptionEvent. #### Unsubscribe @@ -353,137 +325,853 @@ Unsubscribe(responseStream): - Cancel {responseStream} -## Yield Subsequent Payloads - -If an operation contains subsequent payload records resulting from `@stream` or -`@defer` directives, the {YieldSubsequentPayloads} algorithm defines how the -payloads should be processed. - -YieldSubsequentPayloads(initialResponse, subsequentPayloads): - -- Let {initialRecords} be any items in {subsequentPayloads} with a completed - {dataExecution}. -- Initialize {initialIncremental} to an empty list. -- For each {record} in {initialRecords}: - - Remove {record} from {subsequentPayloads}. - - If {isCompletedIterator} on {record} is {true}: - - Continue to the next record in {records}. - - Let {payload} be the completed result returned by {dataExecution}. - - Append {payload} to {initialIncremental}. -- If {initialIncremental} is not empty: - - Add an entry to {initialResponse} named `incremental` containing the value - {incremental}. -- Yield {initialResponse}. -- While {subsequentPayloads} is not empty: - - If a termination signal is received: - - For each {record} in {subsequentPayloads}: - - If {record} contains {iterator}: - - Send a termination signal to {iterator}. - - Return. - - Wait for at least one record in {subsequentPayloads} to have a completed - {dataExecution}. - - Let {subsequentResponse} be an unordered map with an entry {incremental} - initialized to an empty list. - - Let {records} be the items in {subsequentPayloads} with a completed - {dataExecution}. - - For each {record} in {records}: - - Remove {record} from {subsequentPayloads}. - - If {isCompletedIterator} on {record} is {true}: - - Continue to the next record in {records}. - - Let {payload} be the completed result returned by {dataExecution}. - - Append {payload} to the {incremental} entry on {subsequentResponse}. 
-  - If {subsequentPayloads} is empty:
-    - Add an entry to {subsequentResponse} named `hasNext` with the value
-      {false}.
-  - Otherwise, if {subsequentPayloads} is not empty:
-    - Add an entry to {subsequentResponse} named `hasNext` with the value
-      {true}.
-  - Yield {subsequentResponse}
-
-## Executing Selection Sets
-
-To execute a selection set, the object value being evaluated and the object type
-need to be known, as well as whether it must be executed serially, or may be
-executed in parallel.
-
-First, the selection set is turned into a grouped field set; then, each
-represented field in the grouped field set produces an entry into a response
-map.
-
-ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues, path,
-subsequentPayloads, asyncRecord):
+## Incremental Delivery
+
+If an operation contains `@defer` or `@stream` directives, execution may also
+result in a Subsequent Result stream in addition to the initial response.
+Because execution of Subsequent Results may occur concurrently and prior to the
+completion of their parent results, an Incremental Publisher may be used to
+manage the ordering and potential filtering of the Subsequent Result stream, as
+described below.
+
+The Incremental Publisher is responsible for processing Execution Events
+submitted to its queue, including introduction of potential new Subsequent
+Results, exclusion of any Subsequent Results in case of null bubbling within the
+parent, normal completion of Subsequent Results, and errors within Subsequent
+Results that preclude their data from being sent.
+
+The Incremental Publisher provides an asynchronous iterator that resolves to the
+Subsequent Result stream.
+
+Both the Execution algorithms and the Incremental Publisher service utilize
+Incremental Result and Data Records to store required information. The records
+are detailed below, including which entries are required for Execution itself
+and which are required by the Incremental Publisher.
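As an informal illustration only (not part of the specification), the
queue-and-iterator shape described above might be reduced to the following
TypeScript sketch; the class and member names (`IncrementalPublisher`,
`introduce`, `enqueue`, `subsequentResults`) are hypothetical, and a real
implementation must additionally handle asynchrony, ordering, and filtering:

```typescript
// Hypothetical reduction of the Incremental Publisher concept: completed
// Execution Events drain into subsequent responses, and `hasNext` becomes
// false once no Subsequent Results remain pending.
type ExecutionEvent = { type: "completed"; id: string; data: unknown };

class IncrementalPublisher {
  private queue: ExecutionEvent[] = [];
  private pendingIds = new Set<string>();

  // Register a Subsequent Result (e.g. a deferred fragment) as pending.
  introduce(id: string): void {
    this.pendingIds.add(id);
  }

  // Events may be pushed from multiple sources as execution proceeds.
  enqueue(event: ExecutionEvent): void {
    this.queue.push(event);
  }

  // Drain completed events into responses; each response carries `hasNext`,
  // which is false only once nothing remains pending.
  *subsequentResults(): Generator<{ incremental: unknown[]; hasNext: boolean }> {
    while (this.pendingIds.size > 0 && this.queue.length > 0) {
      const event = this.queue.shift()!;
      this.pendingIds.delete(event.id);
      yield { incremental: [event.data], hasNext: this.pendingIds.size > 0 };
    }
  }
}
```

A client consuming this sketch would simply read responses until one arrives
with `hasNext: false`.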
+
+### Incremental Delivery Records
+
+#### Incremental Publisher Records
+
+An Incremental Publisher Record is a structure containing:
+
+- {eventQueue}: a queue to which events can be pushed from multiple sources,
+  perhaps concurrently.
+- {pending}: a list of pending notifications for Deferred Fragment or Stream
+  Records whose parent is the initial result, set after the Completed Initial
+  Result Event has been processed.
+- {subsequentResults}: an iterator for the Subsequent Result stream, meaningful
+  only if {pending} is non-empty.
+
+Note: The Incremental Publisher may be run as a separate service. The inputs to
+the Incremental Publisher are the Execution Events enqueued on {eventQueue}; the
+outputs are the value for {pending} and the {subsequentResults} stream.
+
+Note: The Incremental Publisher could be extended to provide additional
+functionality. For example, for implementations intending to defer execution
+until the parent result has completed, the Incremental Publisher could be
+extended to allow subscribing to this event. Additionally, for implementations
+choosing to ignore incremental delivery directives given a certain threshold of
+already deferred fields, the Incremental Publisher could be extended to allow
+querying for the present number of deferred fields.
+
+#### Incremental Result Records
+
+An Incremental Result Record is either an Initial Result Record or a Subsequent
+Result Record. A Subsequent Result Record is either a Deferred Fragment Record
+or a Stream Items Record.
+
+An Initial Result Record is a structure containing:
+
+- {id}: an implementation-specific value uniquely identifying this record,
+  created if not provided.
+
+A Deferred Fragment Record is a structure that always contains:
+
+- {id}: an implementation-specific value uniquely identifying this record,
+  created if not provided.
+ +Within the Incremental Publisher context, records of this type also include: + +- {label}: value derived from the corresponding `@defer` directive. +- {path}: a list of field names and indices from root to the location of the + corresponding `@defer` directive. +- {deferredGroupedFieldSets}: the set of Deferred Group Field Set Records that + comprise this Deferred Fragment Record, initialized to the empty set. +- {pendingDeferredGroupedFieldSetRecords}: the set of still pending Deferred + Group Field Set Records, initialized to the empty set. +- {errors}: a list of any unrecoverable errors encountered when attempting to + deliver this record, initialized to an empty list. + +A Stream Items Record is a structure that always contains: + +- {id}: an implementation-specific value uniquely identifying this record, + created if not provided. + +Within the Incremental Publisher context, records of this type also include: + +- {path}: a list of field names and indices from root to the location of the + corresponding list item contained by this Stream Items Record. +- {stream}: the Stream Record which this Stream Items Record partially fulfills. +- {items}: a list that will contain the streamed item, if the underlying + iterator produced a value. +- {errors}: a list of all _field error_ raised while completing the value + produced by the iterator. +- {isCompleted}: a boolean value indicating whether this record is complete, + initialized to {false}. +- {isCompletedIterator}: a boolean value indicating whether this record + represents completion of the iterator rather than any actual items. + +A Stream Record is a structure that always contains: + +- {id}: an implementation-specific value uniquely identifying this record, + created if not provided. + +Within the Execution context, records of this type also include: + +- {streamFieldGroup}: A Field Group record for completing stream items. +- {iterator}: The underlying iterator. 
+ +Within the Incremental Publisher context, records of this type also include: + +- {label}: value derived from the corresponding `@stream` directive. +- {path}: a list of field names and indices from root to the location of the + corresponding `@stream` directive. +- {earlyReturn}: implementation-specific value denoting how to notify the + underlying iterator that no additional items will be requested. +- {pendingSent}: a boolean value indicating whether a pending notification for + this record has been sent. +- {errors}: a list of any unrecoverable errors encountered when attempting to + deliver this record, initialized to an empty list. + +#### Incremental Data Records + +An Incremental Data Record is either an Initial Result Record, a Deferred +Grouped Field Set Record or a Stream Items Record. + +A Deferred Grouped Field Set Record is a structure that always contains: + +- {id}: an implementation-specific value uniquely identifying this record, + created if not provided. + +Within the Execution context, records of this type also include: + +- {groupedFieldSet}: a Grouped Field Set to execute. +- {shouldInitiateDefer}: a boolean value indicating whether implementation + specific deferral of execution should be initiated. + +Within the Incremental Publisher context, records of this type also include: + +- {path}: a list of field names and indices from root to the location of this + deferred grouped field set. +- {deferredFragments}: a set of Deferred Fragment Records containing this + record. +- {data}: an ordered map that will contain the result of execution for this + fragment on completion, not defined until the record has been completed. +- {errors}: a list of all _field error_ raised while executing this record, not + defined until the record has been completed. + +Deferred Grouped Field Set Records may fulfill multiple Deferred Fragment +Records secondary to overlapping fields. 
Initial Result Records and Stream Items
+Records each always fulfill a single result record, and so each represents both
+a unit of Incremental Data and an Incremental Result.
+
+### Execution Events
+
+#### New Deferred Fragment Event
+
+Required event details for New Deferred Fragment Events include:
+
+- {id}: string value identifying this Deferred Fragment.
+- {label}: value derived from the corresponding `@defer` directive.
+- {path}: a list of field names and indices from root to the location of the
+  corresponding `@defer` directive.
+- {parentId}: string value identifying the parent incremental result record for
+  this Deferred Fragment.
+
+#### New Deferred Grouped Field Set Event
+
+Required event details for New Deferred Grouped Field Set Events include:
+
+- {id}: string value identifying this Deferred Grouped Field Set.
+- {path}: a list of field names and indices from root to the location of this
+  deferred grouped field set.
+- {fragmentIds}: list of string values identifying the Deferred Fragments
+  containing this Deferred Grouped Field Set.
+
+#### Completed Deferred Grouped Field Set Event
+
+Required event details for Completed Deferred Grouped Field Set Events include:
+
+- {id}: string value identifying this Deferred Grouped Field Set.
+- {data}: ordered map representing the completed data for this Deferred Grouped
+  Field Set.
+- {errors}: the list of _field error_ for this Deferred Grouped Field Set.
+
+#### Errored Deferred Grouped Field Set Event
+
+Required event details for Errored Deferred Grouped Field Set Events include:
+
+- {id}: string value identifying this Deferred Grouped Field Set.
+- {errors}: the _field error_ causing the entire Deferred Grouped Field Set to
+  error.
+
+#### New Stream Event
+
+Required event details for New Stream Events include:
+
+- {id}: string value identifying this Stream.
+- {label}: value derived from the corresponding `@stream` directive.
+- {path}: a list of field names and indices from root to the location of the
+  corresponding `@stream` directive.
+- {earlyReturn}: implementation-specific value denoting how to handle early
+  return of the stream.
+
+#### New Stream Items Event
+
+Required event details for New Stream Items Events include:
+
+- {id}: string value identifying these Stream Items.
+- {streamId}: string value identifying the Stream.
+- {parentIds}: string values identifying the parent incremental data results for
+  these Stream Items.
+
+#### Completed Stream Items Event
+
+Required event details for Completed Stream Items Events include:
+
+- {id}: string value identifying these Stream Items.
+- {items}: the list of items.
+- {errors}: the list of _field error_ for these items.
+
+#### Completed Empty Stream Items Event
+
+Required event details for Completed Empty Stream Items Events include:
+
+- {id}: string value identifying these Stream Items.
+
+#### Errored Stream Items Event
+
+Required event details for Errored Stream Items Events include:
+
+- {id}: string value identifying these Stream Items.
+- {errors}: the _field error_ causing these items to error.
+
+#### Completed Initial Result Event
+
+Required event details for Completed Initial Result Events include:
+
+- {id}: string value identifying this Initial Result.
+
+#### Field Error Event
+
+Required event details for Field Error Events include:
+
+- {id}: string value identifying the Initial Result, Deferred Grouped Field Set,
+  or Stream Items from which the _field error_ originates.
+- {nullPath}: a list of field names and indices from root to the location of the
+  error.
+
+### Creating the Incremental Publisher
+
+The Incremental Publisher manages a queue of incoming Execution Events and is
+responsible when necessary for emitting a stream of Subsequent Results. The
+incoming Execution Events can be handled synchronously, while the stream of
+Subsequent Results is emitted asynchronously as necessary.
The Incremental
+Publisher Record itself exposes the {eventQueue} to the Execution algorithms.
+
+After the Completed Initial Result is processed, if additional results are
+pending, these will be listed within the {pending} entry on the Incremental
+Publisher Record for inclusion within the initial result, and the
+{subsequentResults} iterator will resolve to the stream of all Subsequent
+Results.
+
+CreateIncrementalPublisher():
+
+- Perform the following initial steps:
+
+  - Let {incrementalPublisher} be a new Incremental Publisher Record.
+  - Set {eventQueue} on {incrementalPublisher} to an empty queue.
+  - Initialize {subsequentResultMap} to a map of Incremental Result ids to
+    Subsequent Result Records.
+  - Initialize {deferredFragmentMap} to a map of Deferred Fragment ids to
+    Deferred Fragment Records.
+  - Initialize {deferredGroupedFieldSetMap} to a map of Deferred Grouped Field
+    Set ids to Deferred Grouped Field Set Records.
+  - Initialize {streamMap} to a map of Stream ids to Stream Records.
+  - Initialize {streamItemsMap} to a map of Stream Items ids to Stream Items
+    Records.
+  - Initialize {pendingSubsequentResults} to the empty set.
+  - Initialize {completedSubsequentResults} to the empty set.
+  - Initialize {newPending} to the empty set.
+  - Set {initialResultCompleted} to {false}.
+  - Set {allResultsCompleted} to {false}.
+
+- Define the sub-procedure {ReleaseSubsequentResult(subsequentResult)} as
+  follows:
+
+  - Add {subsequentResult} to {pendingSubsequentResults}.
+  - If {subsequentResult} is a Stream Items Record:
+    - Let {stream} be the corresponding entry on {subsequentResult}.
+    - Let {pendingSent} be the corresponding entry on {stream}.
+    - If {pendingSent} is {false}:
+      - Add {stream} to {newPending}.
+      - Set {pendingSent} on {stream} to {true}.
+    - Let {isCompleted} be the corresponding entry on {subsequentResult}.
+    - If {isCompleted} is {true}:
+      - Add {subsequentResult} to {completedSubsequentResults}.
+  - Otherwise:
+    - Add {subsequentResult} to {newPending}.
+    - Let {pendingDeferredGroupedFieldSets} be the corresponding entry on
+      {subsequentResult}.
+    - If {pendingDeferredGroupedFieldSets} is empty, add {subsequentResult} to
+      {completedSubsequentResults}.
+
+- Define the sub-procedure {HandleCompletedInitialResultEvent(id)} as follows:
+
+  - Let {children} be the corresponding entry on {subsequentResultMap} for {id}.
+  - Delete the entry on {subsequentResultMap} for {id}.
+  - For each {child} in {children}:
+    - Call {ReleaseSubsequentResult(child)}.
+  - Set {pending} on {incrementalPublisher} to
+    {PendingSourcesToResults(newPending)}.
+  - Set {initialResultCompleted} to {true}.
+
+- Define the sub-procedure {HandleNewDeferredFragmentEvent(id, label, path,
+  parentId)} as follows:
+
+  - Let {deferredFragment} be a new Deferred Fragment Record created from {id},
+    {label}, and {path}.
+  - Set the entry for {id} in {deferredFragmentMap} to {deferredFragment}.
+  - Let {children} be the entry in {subsequentResultMap} for {parentId}.
+  - If {children} is not defined:
+    - Initialize {children} to the empty set.
+    - Set the entry for {parentId} in {subsequentResultMap} to {children}.
+  - Add {deferredFragment} to {children}.
+
+- Define the sub-procedure {HandleNewDeferredGroupedFieldSetEvent(id, path,
+  fragmentIds)} as follows:
+
+  - Let {deferredGroupedFieldSet} be a new Deferred Grouped Field Set Record
+    created from {id} and {path}.
+  - Initialize {deferredFragments} to an empty list.
+  - For each {fragmentId} in {fragmentIds}:
+    - Let {deferredFragment} be the entry in {deferredFragmentMap} for
+      {fragmentId}.
+    - Let {deferredGroupedFieldSets} and {pendingDeferredGroupedFieldSets} be
+      the corresponding entries on {deferredFragment}.
+    - Add {deferredGroupedFieldSet} to each of {deferredGroupedFieldSets} and
+      {pendingDeferredGroupedFieldSets}.
+    - Append {deferredFragment} to {deferredFragments}.
+  - Set {deferredFragments} on {deferredGroupedFieldSet} to {deferredFragments}.
+  - Set the entry for {id} on {deferredGroupedFieldSetMap} to
+    {deferredGroupedFieldSet}.
+
+- Define the sub-procedure {HandleCompletedDeferredGroupedFieldSetEvent(id,
+  data, errors)} as follows:
+
+  - Let {deferredGroupedFieldSet} be the entry on {deferredGroupedFieldSetMap}
+    for {id}.
+  - Set the corresponding entries on {deferredGroupedFieldSet} to {data} and
+    {errors}.
+  - Let {deferredFragments} be the corresponding entry on
+    {deferredGroupedFieldSet}.
+  - For each {deferredFragment} in {deferredFragments}:
+    - Let {pendingDeferredGroupedFieldSets} be the corresponding entry on
+      {deferredFragment}.
+    - Remove {deferredGroupedFieldSet} from {pendingDeferredGroupedFieldSets}.
+    - If {pendingDeferredGroupedFieldSets} is empty and
+      {pendingSubsequentResults} contains {deferredFragment}:
+      - Add {deferredFragment} to {completedSubsequentResults}.
+
+- Define the sub-procedure {HandleErroredDeferredGroupedFieldSetEvent(id,
+  errors)} as follows:
+
+  - Let {deferredGroupedFieldSet} be the entry for {id} on
+    {deferredGroupedFieldSetMap}.
+  - Let {deferredFragments} be the corresponding entry on
+    {deferredGroupedFieldSet}.
+  - For each {deferredFragment} in {deferredFragments}:
+    - For each {error} in {errors}:
+      - Append {error} to the list of {errors} on {deferredFragment}.
+    - If {pendingSubsequentResults} contains {deferredFragment}:
+      - Add {deferredFragment} to {completedSubsequentResults}.
+
+- Define the sub-procedure {HandleNewStreamEvent(id, label, path, earlyReturn)}
+  as follows:
+
+  - Let {stream} be a new Stream Record created from {id}, {label}, {path}, and
+    {earlyReturn}.
+  - Set the entry for {id} on {streamMap} to {stream}.
+
+- Define the sub-procedure {HandleNewStreamItemsEvent(id, streamId, parentIds)}
+  as follows:
+
+  - Let {stream} be the entry in {streamMap} for {streamId}.
+  - Let {streamItems} be a new Stream Items record created from {id} and
+    {stream}.
+  - Set the entry for {id} on {streamItemsMap} to {streamItems}.
+  - For each {parentId} in {parentIds}:
+    - Let {children} be the entry in {subsequentResultMap} for {parentId}.
+    - If {children} is not defined:
+      - Initialize {children} to the empty set.
+      - Set the entry for {parentId} in {subsequentResultMap} to {children}.
+    - Add {streamItems} to {children}.
+
+- Define the sub-procedure {HandleCompletedStreamItemsEvent(id, items, errors)}
+  as follows:
+
+  - Let {streamItems} be the entry on {streamItemsMap} for {id}.
+  - Set the corresponding entries on {streamItems} to {items} and {errors}.
+  - Set {isCompleted} on {streamItems} to {true}.
+  - If {pendingSubsequentResults} contains {streamItems}:
+    - Add {streamItems} to {completedSubsequentResults}.
+
+- Define the sub-procedure {HandleCompletedEmptyStreamItemEvent(id)} as follows:
+
+  - Let {streamItems} be the entry on {streamItemsMap} for {id}.
+  - Set {isCompletedIterator} on {streamItems} to {true}.
+  - Set {isCompleted} on {streamItems} to {true}.
+  - If {pendingSubsequentResults} contains {streamItems}:
+    - Add {streamItems} to {completedSubsequentResults}.
+
+- Define the sub-procedure {HandleErroredStreamItemEvent(id, errors)} as
+  follows:
+
+  - Let {streamItems} be the entry on {streamItemsMap} for {id}.
+  - Set the corresponding entry on {streamItems} to {errors}.
+  - Let {stream} be the corresponding entry on {streamItems}.
+  - For each {error} in {errors}:
+    - Append {error} to the list of {errors} on {stream}.
+  - If {pendingSubsequentResults} contains {streamItems}:
+    - Add {streamItems} to {completedSubsequentResults}.
+
+- Define the sub-procedure {HandleFieldErrorEvent(id, nullPath)} as follows:
+
+  - Let {children} be the result of {GetChildren(subsequentResultMap, id)}.
+  - Let {descendants} be the result of {GetDescendants(children)}.
+  - Let {streams} be an empty set of Stream Records.
+  - For each {descendant} in {descendants}:
+    - If {NullsSubsequentResultRecord(descendant, nullPath)} is not {true}:
+      - Continue to the next {descendant} in {descendants}.
+    - Let {id} be the corresponding entry on {descendant}.
+    - Delete the entry for {id} on {subsequentResultMap}.
+    - If {descendant} is a Stream Items Record:
+      - Let {stream} be the corresponding entry on {descendant}.
+      - Add {stream} to {streams}.
+      - Let {streamId} be the {id} entry on {stream}.
+      - Delete the entry for {streamId} on {streamMap}.
+      - Delete the entry for {id} on {streamItemsMap}.
+    - Otherwise:
+      - Delete the entry for {id} on {deferredFragmentMap}.
+      - Let {deferredGroupedFieldSets} be the corresponding entry on
+        {descendant}.
+      - For each {deferredGroupedFieldSet} in {deferredGroupedFieldSets}:
+        - Let {deferredFragments} be the corresponding entry on
+          {deferredGroupedFieldSet}.
+        - Remove {descendant} from {deferredFragments}.
+        - If {deferredFragments} is empty:
+          - Let {deferredGroupedFieldSetId} be the {id} entry on
+            {deferredGroupedFieldSet}.
+          - Delete the entry for {deferredGroupedFieldSetId} on
+            {deferredGroupedFieldSetMap}.
+  - For each {stream} in {streams}:
+    - Let {earlyReturn} be the corresponding entry on {stream}.
+    - As specified by the implementation-specific value within {earlyReturn},
+      notify the underlying iterator that no additional items will be requested.
+
+- Define the sub-procedure {HandleExecutionEvent(eventType, eventDetails)} as
+  follows:
+
+  - If {eventType} is a Completed Initial Result Event:
+    - Let {id} be the corresponding entry on {eventDetails}.
+    - Call {HandleCompletedInitialResultEvent(id)}.
+  - If {eventType} is a New Deferred Fragment Event:
+    - Let {id}, {label}, {path}, and {parentId} be the corresponding entries on
+      {eventDetails}.
+    - Call {HandleNewDeferredFragmentEvent(id, label, path, parentId)}.
+  - If {eventType} is a New Deferred Grouped Field Set Event:
+    - Let {id}, {path}, and {fragmentIds} be the corresponding entries on
+      {eventDetails}.
+    - Call {HandleNewDeferredGroupedFieldSetEvent(id, path, fragmentIds)}.
+  - If {eventType} is a Completed Deferred Grouped Field Set Event:
+    - Let {id}, {data}, and {errors} be the corresponding entries on
+      {eventDetails}.
+    - Call {HandleCompletedDeferredGroupedFieldSetEvent(id, data, errors)}.
+  - If {eventType} is an Errored Deferred Grouped Field Set Event:
+    - Let {id} and {errors} be the corresponding entries on {eventDetails}.
+    - Call {HandleErroredDeferredGroupedFieldSetEvent(id, errors)}.
+  - If {eventType} is a New Stream Event:
+    - Let {id}, {label}, {path}, and {earlyReturn} be the corresponding entries
+      on {eventDetails}.
+    - Call {HandleNewStreamEvent(id, label, path, earlyReturn)}.
+  - If {eventType} is a New Stream Items Event:
+    - Let {id}, {streamId}, and {parentIds} be the corresponding entries on
+      {eventDetails}.
+    - Call {HandleNewStreamItemsEvent(id, streamId, parentIds)}.
+  - If {eventType} is a Completed Stream Items Event:
+    - Let {id}, {items}, and {errors} be the corresponding entries on
+      {eventDetails}.
+    - Call {HandleCompletedStreamItemsEvent(id, items, errors)}.
+  - If {eventType} is a Completed Empty Stream Items Event:
+    - Let {id} be the corresponding entry on {eventDetails}.
+    - Call {HandleCompletedEmptyStreamItemEvent(id)}.
+  - If {eventType} is an Errored Stream Items Event:
+    - Let {id} and {errors} be the corresponding entries on {eventDetails}.
+    - Call {HandleErroredStreamItemEvent(id, errors)}.
+  - If {eventType} is a Field Error Event:
+    - Let {id} and {nullPath} be the corresponding entries on {eventDetails}.
+    - Call {HandleFieldErrorEvent(id, nullPath)}.
+
+- Define the sub-procedure {YieldSubsequentResults()} as follows:
+
+  - Wait for {initialResultCompleted} to be set to {true}.
+  - Repeat the following steps:
+    - If a termination signal was received:
+      - For each {stream} in {streamMap}:
+        - Let {earlyReturn} be the corresponding entry on {stream}.
+        - As specified by the implementation-specific value within
+          {earlyReturn}, notify the underlying iterator that no additional items
+          will be requested.
+      - Return.
+    - Clear {newPending} and re-initialize it to the empty set.
+    - Initialize {incrementalResults} to an empty list.
+    - Initialize {completedRecords} to the empty set.
+    - While {completedSubsequentResults} is not empty:
+      - Initialize {currentBatch} to the empty set.
+      - For each {subsequentResult} in {completedSubsequentResults}:
+        - Add {subsequentResult} to {currentBatch}.
+        - Remove {subsequentResult} from both {completedSubsequentResults} and
+          {pendingSubsequentResults}.
+      - For each {subsequentResult} in {currentBatch}:
+        - Let {id} be the corresponding entry on {subsequentResult}.
+        - Let {children} be the corresponding entry on {subsequentResultMap} for
+          {id}.
+        - Delete the entry on {subsequentResultMap} for {id}.
+        - For each {child} in {children}:
+          - Call {ReleaseSubsequentResult(child)}.
+        - If {subsequentResult} is a Stream Items Record:
+          - Delete the entry for {id} on {streamItemsMap}.
+          - Let {isCompletedIterator} be the corresponding entry on
+            {subsequentResult}.
+          - Let {stream} be the corresponding entry on {subsequentResult}.
+          - Let {streamErrors} be the entry for {errors} on {stream}.
+          - If {streamErrors} is not empty or if {isCompletedIterator} is
+            {true}:
+            - Remove {stream} from {newPending}, if present.
+            - Add {stream} to {completedRecords}.
+            - Let {streamId} be the {id} entry on {stream}.
+            - Delete the entry for {streamId} on {streamMap}.
+            - Continue to the next {subsequentResult} in {currentBatch}.
+          - Let {items} and {errors} be the corresponding entries on
+            {subsequentResult}.
+          - Let {incrementalResult} be an unordered map containing {items}.
+          - If {errors} is not empty:
+            - Set the corresponding entry on {incrementalResult} to {errors}.
+          - Append {incrementalResult} to {incrementalResults}.
+        - Otherwise:
+          - Delete the entry for {id} on {deferredFragmentMap}.
+          - Remove {subsequentResult} from {newPending}, if present.
+          - Let {errors} be the corresponding entry on {subsequentResult}.
+          - If {errors} is not empty:
+            - Add {subsequentResult} to {completedRecords}.
+            - Let {deferredGroupedFieldSets} be the corresponding entry on
+              {subsequentResult}.
+            - For each {deferredGroupedFieldSet} in {deferredGroupedFieldSets}:
+              - Let {deferredFragments} be the corresponding entry on
+                {deferredGroupedFieldSet}.
+              - Remove {subsequentResult} from {deferredFragments}.
+              - If {deferredFragments} is empty:
+                - Let {deferredGroupedFieldSetId} be the {id} entry on
+                  {deferredGroupedFieldSet}.
+                - Delete the entry for {deferredGroupedFieldSetId} on
+                  {deferredGroupedFieldSetMap}.
+            - Continue to the next {subsequentResult} in {currentBatch}.
+          - Add {subsequentResult} to {completedRecords}.
+          - Let {deferredGroupedFieldSets} be the corresponding entry on
+            {subsequentResult}.
+          - For each {deferredGroupedFieldSet} in {deferredGroupedFieldSets}:
+            - Let {deferredGroupedFieldSetId} be the {id} entry on
+              {deferredGroupedFieldSet}.
+            - If {deferredGroupedFieldSetMap} does not include an entry for
+              {deferredGroupedFieldSetId}:
+              - This Deferred Grouped Field Set has already been sent; continue
+                to the next {deferredGroupedFieldSet} in
+                {deferredGroupedFieldSets}.
+            - Delete the entry for {deferredGroupedFieldSetId} on
+              {deferredGroupedFieldSetMap}.
+            - Let {data} and {errors} be the corresponding entries on
+              {deferredGroupedFieldSet}.
+            - Let {incrementalResult} be an unordered map containing {data}.
+            - If {errors} is not empty:
+              - Set the corresponding entry on {incrementalResult} to {errors}.
+            - Append {incrementalResult} to {incrementalResults}.
+    - Let {hasNext} be {false} if {pendingSubsequentResults} is empty;
+      otherwise, let it be {true}.
+    - Let {subsequentResponse} be an unordered map containing {hasNext}.
+    - If {newPending} is not empty:
+      - Set the corresponding entry on {subsequentResponse} to
+        {PendingSourcesToResults(newPending)}.
+    - If {incrementalResults} is not empty:
+      - Set the {incremental} entry on {subsequentResponse} to
+        {incrementalResults}.
+    - If {completedRecords} is not empty:
+      - Set the corresponding entry on {subsequentResponse} to
+        {CompletedRecordsToResults(completedRecords)}.
+    - Yield {subsequentResponse}.
+    - If {pendingSubsequentResults} is empty:
+      - Set {allResultsCompleted} to {true}.
+      - Return.
+    - If {completedSubsequentResults} is empty:
+      - Wait until additional event(s) are pushed to the queue.
+
+- Set up the event handler and the iterator as follows:
+
+  - For each event pushed to {eventQueue} of type {eventType} described by
+    {eventDetails}, in order of its entry into the queue:
+    - Remove the event from the queue.
+    - Call {HandleExecutionEvent(eventType, eventDetails)}.
+    - Wait for the next event or for {allResultsCompleted} to be set to {true}.
+    - If {allResultsCompleted} is {true}, return.
+  - In parallel, set {subsequentResults} on {incrementalPublisher} to the result
+    of lazily executing {YieldSubsequentResults()}.
+
+- Return {incrementalPublisher}.
+
+The below sub-procedures are used by {CreateIncrementalPublisher()} and its
+sub-procedures, but are listed separately as they do not modify the Incremental
+Publisher's internal state:
+
+PendingSourcesToResults(pendingSources):
+
+- Initialize {pendingResults} to an empty list.
+- For each {pendingSource} in {pendingSources}:
+  - Let {path} and {label} be the corresponding entries on {pendingSource}.
+  - Let {pendingResult} be an unordered map containing {path} and {label}.
+  - Append {pendingResult} to {pendingResults}.
+- Return {pendingResults}.
+
+GetChildren(subsequentResultMap, id):
+
+- Let {children} be the empty set.
+- If {deferredGroupedFieldSetMap} contains an entry for {id}:
+  - Let {deferredGroupedFieldSet} be that entry.
+  - Let {deferredFragments} be the corresponding entry on
+    {deferredGroupedFieldSet}.
+  - For each {deferredFragment} in {deferredFragments}:
+    - Let {id} be the corresponding entry on {deferredFragment}.
+    - Let {resultChildren} be the entry in {subsequentResultMap} for {id}.
+    - For each {child} in {resultChildren}:
+      - Add {child} to {children}.
+- Otherwise:
+  - Let {resultChildren} be the entry in {subsequentResultMap} for {id}.
+  - For each {child} in {resultChildren}:
+    - Add {child} to {children}.
+- Return {children}.
+
+GetDescendants(children, descendants):
+
+- If {descendants} is not provided, let it be the empty set.
+- For each {child} in {children}:
+  - Add {child} to {descendants}.
+  - Let {grandchildren} be the value for {children} on {child}.
+  - Call {GetDescendants(grandchildren, descendants)}.
+- Return {descendants}.
+
+NullsSubsequentResultRecord(subsequentResult, nullPath):
+
+- If {subsequentResult} is a Stream Items Record:
+  - Let {incrementalDataRecords} be a list containing {subsequentResult}.
+- Otherwise:
+  - Let {incrementalDataRecords} be the value corresponding to the entry for
+    {deferredGroupedFieldSets} on {subsequentResult}.
+- Let {matched} equal {false}.
+- For each {incrementalDataRecord} in {incrementalDataRecords}:
+  - Let {path} be the corresponding entry on {incrementalDataRecord}.
+  - If {MatchesPath(path, nullPath)} is {true}:
+    - Set {matched} equal to {true}.
+    - Optionally, cancel any incomplete work in the execution of
+      {incrementalDataRecord}.
+- Return {matched}.
+
+MatchesPath(testPath, basePath):
+
+- Initialize {index} to zero.
+- While {index} is less than the length of {basePath}:
+  - Initialize {basePathItem} to the element at {index} in {basePath}.
+  - Initialize {testPathItem} to the element at {index} in {testPath}.
+  - If {basePathItem} is not equivalent to {testPathItem}:
+    - Return {false}.
+  - Increment {index} by one.
+- Return {true}.
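The intent of {MatchesPath()} is a prefix check: a Subsequent Result is nulled
when the errored (base) path is an ancestor of, or equal to, the record's (test)
path. A minimal TypeScript transcription follows; the `matchesPath` name and
`PathSegment` type are illustrative, not normative:

```typescript
// Illustrative transcription of MatchesPath(testPath, basePath): returns true
// when basePath is a prefix of testPath, i.e. the nulled location contains
// the record's location.
type PathSegment = string | number;

function matchesPath(testPath: PathSegment[], basePath: PathSegment[]): boolean {
  for (let index = 0; index < basePath.length; index++) {
    // Any divergence within basePath's length means testPath is not beneath it.
    if (basePath[index] !== testPath[index]) {
      return false;
    }
  }
  return true;
}
```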
+
+CompletedRecordsToResults(records):
+
+- Initialize {completedResults} to an empty list.
+- For each {record} in {records}:
+  - Let {path}, {label}, and {errors} be the corresponding entries on {record}.
+  - Let {completedResult} be an unordered map containing {path} and {label}.
+  - If {errors} is not empty, set the corresponding entry on {completedResult}
+    to {errors}.
+  - Append {completedResult} to {completedResults}.
+- Return {completedResults}.
+
+## Executing the Root Selection Set
+
+To execute the root selection set, the object value being evaluated and the
+object type need to be known, as well as whether it must be executed serially,
+or may be executed in parallel.
+
+Executing the root selection set works similarly for queries (parallel),
+mutations (serial), and subscriptions (where it is executed for each event in
+the underlying Source Stream).
+
+First, the selection set is turned into a grouped field set; then, we execute
+this grouped field set and return the resulting {data} and {errors}.
+
+ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet,
+serial):
+
+- Let {fieldsByTarget}, {targetsByKey}, and {newDeferUsages} be the result of
+  calling {AnalyzeSelectionSet(objectType, selectionSet, variableValues)}.
+- Let {groupedFieldSet} and {newGroupedFieldSetDetails} be the result of
+  calling {BuildGroupedFieldSets(fieldsByTarget, targetsByKey)}.
+- Let {incrementalPublisher} be the result of {CreateIncrementalPublisher()}.
+- Let {initialResultRecord} be a new Initial Result Record.
+- Let {newDeferMap} be the result of
+  {AddNewDeferFragments(incrementalPublisher, newDeferUsages,
+  initialResultRecord)}.
+- Let {newDeferredGroupedFieldSets} be the result of
+  {AddNewDeferredGroupedFieldSets(incrementalPublisher,
+  newGroupedFieldSetDetails, newDeferMap)}.
+- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet,
+  objectType, initialValue, variableValues, incrementalPublisher,
+  initialResultRecord)} _serially_ if {serial} is {true}, _normally_ (allowing
+  parallelization) otherwise.
+- In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, initialValue,
+  variableValues, incrementalPublisher, newDeferredGroupedFieldSets,
+  newDeferMap)}.
+- Let {id} be the corresponding entry on {initialResultRecord}.
+- Let {errors} be the list of all _field error_ raised while executing the
+  {groupedFieldSet}.
+- Initialize {initialResult} to an empty unordered map.
+- If {errors} is not empty:
+  - Set the corresponding entry on {initialResult} to {errors}.
+- Set {data} on {initialResult} to {data}.
+- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
+- Enqueue a Completed Initial Result Event on {eventQueue} with {id}.
+- Let {pending} be the corresponding entry on {incrementalPublisher}.
+- Wait for {pending} to be set.
+- If {pending} is empty, return {initialResult}.
+- Let {hasNext} be {true}.
+- Set the corresponding entries on {initialResult} to {pending} and {hasNext}.
+- Let {subsequentResults} be the corresponding entry on {incrementalPublisher}.
+- Return {initialResult} and {subsequentResults}.
+
+AddNewDeferFragments(incrementalPublisher, newDeferUsages,
+incrementalDataRecord, deferMap, path):
+
+- If {newDeferUsages} is empty:
+  - Let {newDeferMap} be {deferMap}.
+- Otherwise:
+  - Let {newDeferMap} be a new empty unordered map of Defer Usage records to
+    Deferred Fragment records.
+  - For each {deferUsage} and {deferredFragment} in {deferMap}:
+    - Set the entry for {deferUsage} in {newDeferMap} to {deferredFragment}.
+- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
+- For each {deferUsage} in {newDeferUsages}:
+  - Let {label} be the corresponding entry on {deferUsage}.
+  - Let {parent} be {GetParentTarget(deferUsage, deferMap,
+    incrementalDataRecord)}.
+  - Let {parentId} be the entry for {id} on {parent}.
+  - Let {deferredFragment} be a new Deferred Fragment Record.
+  - Let {id} be the corresponding entry on {deferredFragment}.
+  - Enqueue a New Deferred Fragment Event on {eventQueue} with details {label},
+    {path}, and {parentId}.
+  - Set the entry for {deferUsage} in {newDeferMap} to {deferredFragment}.
+- Return {newDeferMap}.
+
+GetParentTarget(deferUsage, deferMap, incrementalDataRecord):
+
+- Let {ancestors} be the corresponding entry on {deferUsage}.
+- Let {parentDeferUsage} be the first member of {ancestors}.
+- If {parentDeferUsage} is not defined, return {incrementalDataRecord}.
+- Let {parentRecord} be the corresponding entry in {deferMap} for
+  {parentDeferUsage}.
+- Return {parentRecord}.
+
+AddNewDeferredGroupedFieldSets(incrementalPublisher, newGroupedFieldSetDetails,
+deferMap, path):
+
+- Initialize {newDeferredGroupedFieldSets} to an empty list.
+- For each {deferUsageSet} and {groupedFieldSetDetails} in
+  {newGroupedFieldSetDetails}:
+  - Let {groupedFieldSet} and {shouldInitiateDefer} be the corresponding
+    entries on {groupedFieldSetDetails}.
+  - Let {deferredGroupedFieldSet} be a new Deferred Grouped Field Set Record
+    created from {groupedFieldSet} and {shouldInitiateDefer}.
+  - Let {deferredFragments} be the result of
+    {GetDeferredFragments(deferUsageSet, deferMap)}.
+  - Let {fragmentIds} be an empty list.
+  - For each {deferredFragment} in {deferredFragments}:
+    - Let {id} be the corresponding entry on {deferredFragment}.
+    - Append {id} to {fragmentIds}.
+  - Let {id} be the corresponding entry on {deferredGroupedFieldSet}.
+  - Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
+  - Enqueue a New Deferred Grouped Field Set Event on {eventQueue} with details
+    {id}, {path}, and {fragmentIds}.
+  - Append {deferredGroupedFieldSet} to {newDeferredGroupedFieldSets}.
+- Return {newDeferredGroupedFieldSets}. + +GetDeferredFragments(deferUsageSet, deferMap): + +- Let {deferredFragments} be an empty list of Deferred Fragment records. +- For each {deferUsage} in {deferUsageSet}: + - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. + - Append {deferredFragment} to {deferredFragments}. +- Return {deferredFragments}. + +## Executing a Grouped Field Set + +To execute a grouped field set, the object value being evaluated and the object +type need to be known, as well as whether it must be executed serially, or may +be executed in parallel. + +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, +path, deferMap, incrementalPublisher, incrementalDataRecord): - If {path} is not provided, initialize it to an empty list. -- If {subsequentPayloads} is not provided, initialize it to the empty set. -- Let {groupedFieldSet} and {deferredGroupedFieldsList} be the result of - {CollectFields(objectType, selectionSet, variableValues)}. - Initialize {resultMap} to an empty ordered map. -- For each {groupedFieldSet} as {responseKey} and {fields}: - - Let {fieldName} be the name of the first entry in {fields}. Note: This value - is unaffected if an alias is used. +- For each {groupedFieldSet} as {responseKey} and {fieldGroup}: + - Let {fieldDetails} be the first entry in {fieldGroup}. + - Let {node} be the corresponding entry on {fieldDetails}. + - Let {fieldName} be the name of {node}. Note: This value is unaffected if an + alias is used. - Let {fieldType} be the return type defined for the field {fieldName} of {objectType}. - If {fieldType} is defined: - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType, - fields, variableValues, path, subsequentPayloads, asyncRecord)}. + fieldGroup, variableValues, path, incrementalPublisher, + incrementalDataRecord)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. 
-- For each {deferredGroupFieldSet} and {label} in {deferredGroupedFieldsList} - - Call {ExecuteDeferredFragment(label, objectType, objectValue, - deferredGroupFieldSet, path, variableValues, asyncRecord, - subsequentPayloads)} - Return {resultMap}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section below. +is explained in greater detail in the Selection Set Analysis section below. **Errors and Non-Null Fields** -If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a -_field error_ then that error must propagate to this entire selection set, +If during {ExecuteGroupedFieldSet()} a field with a non-null {fieldType} raises +a _field error_ then that error must propagate to this entire grouped field set, either resolving to {null} if allowed or further propagated to a parent field. If this occurs, any sibling fields which have not yet executed or have not yet yielded a value may be cancelled to avoid unnecessary work. -Additionally, async payload records in {subsequentPayloads} must be filtered if -their path points to a location that has resolved to {null} due to propagation -of a field error. This is described in -[Filter Subsequent Payloads](#sec-Filter-Subsequent-Payloads). These async -payload records must be removed from {subsequentPayloads} and their result must -not be sent to clients. If these async records have not yet executed or have not -yet yielded a value they may also be cancelled to avoid unnecessary work. +All raised field errors should also be enqueued as Field Error Events including +the {id} of the originating Incremental Data Record and the path to the final +propagated {null}. -Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about -this behavior. 
- -### Filter Subsequent Payloads - -When a field error is raised, there may be async payload records in -{subsequentPayloads} with a path that points to a location that has been removed -or set to null due to null propagation. These async payload records must be -removed from subsequent payloads and their results must not be sent to clients. - -In {FilterSubsequentPayloads}, {nullPath} is the path which has resolved to null -after propagation as a result of a field error. {currentAsyncRecord} is the -async payload record where the field error was raised. {currentAsyncRecord} will -not be set for field errors that were raised during the initial execution -outside of {ExecuteDeferredFragment} or {ExecuteStreamField}. - -FilterSubsequentPayloads(subsequentPayloads, nullPath, currentAsyncRecord): - -- For each {asyncRecord} in {subsequentPayloads}: - - If {asyncRecord} is the same record as {currentAsyncRecord}: - - Continue to the next record in {subsequentPayloads}. - - Initialize {index} to zero. - - While {index} is less then the length of {nullPath}: - - Initialize {nullPathItem} to the element at {index} in {nullPath}. - - Initialize {asyncRecordPathItem} to the element at {index} in the {path} - of {asyncRecord}. - - If {nullPathItem} is not equivalent to {asyncRecordPathItem}: - - Continue to the next record in {subsequentPayloads}. - - Increment {index} by one. - - Remove {asyncRecord} from {subsequentPayloads}. Optionally, cancel any - incomplete work in the execution of {asyncRecord}. +Additionally, unpublished Subsequent Result records must be filtered if their +path points to a location that has resolved to {null} due to propagation of a +field error. If these subsequent results have not yet executed or have not yet +yielded a value they may also be cancelled to avoid unnecessary work. For example, assume the field `alwaysThrows` is a `Non-Null` type that always raises a field error: @@ -510,6 +1198,9 @@ be cancelled. 
} ``` +Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about +this behavior. + ### Normal and Serial Execution Normally the executor can execute the entries in a grouped field set in whatever @@ -609,23 +1300,18 @@ A correct executor must generate the following result for that selection set: When subsections contain a `@stream` or `@defer` directive, these subsections are no longer required to execute serially. Execution of the deferred or streamed sections of the subsection may be executed in parallel, as defined in -{ExecuteStreamField} and {ExecuteDeferredFragment}. +{ExecuteDeferredGroupedFieldSets} and {ExecuteStreamField}. -### Field Collection +### Selection Set Analysis Before execution, the selection set is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. A deferred selection set's -fields will not be included in the grouped field set. Rather, a record -representing the deferred fragment and additional context will be stored in a -list. The executor revisits and resumes execution for the list of deferred -fragment records after the initial execution is initiated. This deferred -execution would ‘re-execute’ fields with the same response key that were present -in the grouped field set. - -As an example, collecting the fields of this selection set would collect two +calling {AnalyzeSelectionSet()} and {BuildGroupedFieldSets()}. Each entry in the +grouped field set is a Field Group record describing all fields that share a +response key (the alias if defined, otherwise the field name). This ensures all +fields with the same response key (including those in referenced fragments) are +executed at the same time. 
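The grouping behavior described above can be sketched in non-normative TypeScript. The record shapes here are illustrative and deliberately simpler than the Field Group and Field Details records the spec text defines; only the response-key rule (alias if defined, otherwise field name) and the stable first-encounter ordering are being demonstrated:

```typescript
// Simplified stand-in for a parsed field node (illustrative only).
interface FieldNode {
  name: string;
  alias?: string;
}

// The response key is the alias if defined, otherwise the field name.
function responseKey(field: FieldNode): string {
  return field.alias ?? field.name;
}

// Groups fields by response key. A JavaScript Map iterates in insertion
// order, which preserves the depth-first first-encounter order of keys.
function groupByResponseKey(fields: FieldNode[]): Map<string, FieldNode[]> {
  const grouped = new Map<string, FieldNode[]>();
  for (const field of fields) {
    const key = responseKey(field);
    const group = grouped.get(key) ?? [];
    group.push(field);
    grouped.set(key, group);
  }
  return grouped;
}

const grouped = groupByResponseKey([
  { name: "a" },
  { name: "b" },
  { name: "a" }, // same response key as the first field: merged into one group
]);
console.log([...grouped.keys()]); // [ 'a', 'b' ]
```

Both instances of `a` end up in a single group and are therefore executed at the same time, mirroring the example below.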
+
+As an example, analysis of the fields of this selection set would return two
+instances of the field `a` and one of field `b`:
+
 ```graphql example

@@ -644,17 +1330,63 @@ fragment ExampleFragment on Query {
 }
 ```

-The depth-first-search order of the field groups produced by {CollectFields()}
-is maintained through execution, ensuring that fields appear in the executed
-response in a stable and predictable order.
-
-CollectFields(objectType, selectionSet, variableValues, visitedFragments,
-deferredGroupedFieldsList):
-
-- If {visitedFragments} is not provided, initialize it to the empty set.
-- Initialize {groupedFields} to an empty ordered map of lists.
-- If {deferredGroupedFieldsList} is not provided, initialize it to an empty
-  list.
+The depth-first-search order of the field groups produced by selection set
+processing is maintained through execution, ensuring that fields appear in the
+executed response in a stable and predictable order.
+
+{AnalyzeSelectionSet()} also returns a list of references to any new deferred
+fragments encountered in the selection set. {BuildGroupedFieldSets()} also
+potentially returns additional deferred grouped field sets related to new or
+previously encountered deferred fragments. Additional grouped field sets are
+constructed carefully so as to ensure that each field is executed exactly once
+and so that fields are grouped according to the set of deferred fragments that
+include them.
+
+Information derived from the presence of a `@defer` directive on a fragment is
+returned as a Defer Usage record, unique to its label, a structure containing:
+
+- {label}: value of the corresponding argument to the `@defer` directive.
+- {ancestors}: a list, where the first entry is the parent Defer Usage record
+  corresponding to the deferred fragment enclosing this deferred fragment, and
+  the remaining entries are the values included within the {ancestors} entry of
+  that parent Defer Usage record; or, if this Defer Usage record is deferred
+  directly by the initial result, a list containing the single value
+  {undefined}.
+
+A Field Group record is a structure containing:
+
+- {fields}: a list of Field Details records for each encountered field.
+- {targets}: the set of Defer Usage records corresponding to the deferred
+  fragments enclosing this field, as well as possibly the value {undefined} if
+  the field is included within the initial response.
+
+A Field Details record is a structure containing:
+
+- {node}: the field node itself.
+- {target}: the Defer Usage record corresponding to the deferred fragment
+  enclosing this field, or the value {undefined} if the field was not deferred.
+
+Additional deferred grouped field sets are returned as Grouped Field Set
+Details records, which are structures containing:
+
+- {groupedFieldSet}: the grouped field set itself.
+- {shouldInitiateDefer}: a boolean value indicating whether the executor should
+  defer execution of {groupedFieldSet}.
+
+Deferred grouped field sets do not always require initiating deferral. For
+example, when a parent field is deferred by multiple fragments, deferral is
+initiated on the parent field. New grouped field sets for child fields will be
+created if the child fields are not all present in all of the deferred
+fragments, but these new grouped field sets, while representing deferred
+fields, do not require additional deferral.
+
+AnalyzeSelectionSet(objectType, selectionSet, variableValues, visitedFragments,
+parentTarget, newTarget):
+
+- If {visitedFragments} is not provided, initialize it to the empty set.
+- Initialize {targetsByKey} to an empty unordered map of sets.
+- Initialize {fieldsByTarget} to an empty unordered map of ordered maps.
+- Initialize {newDeferUsages} to an empty list of Defer Usage records.
 - For each {selection} in {selectionSet}:
   - If {selection} provides the directive `@skip`, let {skipDirective} be that
     directive.
@@ -669,7 +1401,14 @@ deferredGroupedFieldsList):
   - If {selection} is a {Field}:
     - Let {responseKey} be the response key of {selection} (the alias if
       defined, otherwise the field name).
-    - Let {groupForResponseKey} be the list in {groupedFields} for
+    - Let {target} be {newTarget} if {newTarget} is defined; otherwise, let
+      {target} be {parentTarget}.
+    - Let {targetsForKey} be the set in {targetsByKey} for {responseKey}; if no
+      such set exists, create it as the empty set.
+    - Add {target} to {targetsForKey}.
+    - Let {fieldsForTarget} be the map in {fieldsByTarget} for {target}; if no
+      such map exists, create it as an unordered map.
+    - Let {groupForResponseKey} be the list in {fieldsForTarget} for
       {responseKey}; if no such list exists, create it as an empty list.
     - Append {selection} to the {groupForResponseKey}.
   - If {selection} is a {FragmentSpread}:
@@ -695,21 +1434,32 @@ deferredGroupedFieldsList):
     - If {deferDirective} is defined:
       - Let {label} be the value or the variable to {deferDirective}'s {label}
        argument.
-      - Let {groupForResponseKey} be the list in {groupedFields} for
-        {responseKey}; if no such list exists, create it as an empty list.
-      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+      - Let {ancestors} be an empty list.
+      - Append {parentTarget} to {ancestors}.
+      - If {parentTarget} is defined:
+        - Let {parentAncestors} be the {ancestors} entry on {parentTarget}.
+        - Append all items in {parentAncestors} to {ancestors}.
+      - Let {target} be a new Defer Usage record created from {label} and
+        {ancestors}.
+      - Append {target} to {newDeferUsages}.
+    - Otherwise:
+      - Let {target} be {newTarget}.
+    - Let {fragmentFieldsByTarget}, {fragmentTargetsByKey}, and
+      {fragmentNewDeferUsages} be the result of calling
+      {AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues,
+      visitedFragments, parentTarget, target)}.
+    - For each {target} and {fragmentMap} in {fragmentFieldsByTarget}:
+      - Let {mapForTarget} be the ordered map in {fieldsByTarget} for {target};
+        if no such map exists, create it as an empty ordered map.
+      - For each {responseKey} and {fragmentList} in {fragmentMap}:
+        - Let {listForResponseKey} be the list in {mapForTarget} for
+          {responseKey}; if no such list exists, create it as an empty list.
+        - Append all items in {fragmentList} to {listForResponseKey}.
+    - For each {responseKey} and {targetSet} in {fragmentTargetsByKey}:
+      - Let {setForResponseKey} be the set in {targetsByKey} for {responseKey};
+        if no such set exists, create it as the empty set.
+      - Add all items in {targetSet} to {setForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
 - If {selection} is an {InlineFragment}:
   - Let {fragmentType} be the type condition on {selection}.
   - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
@@ -725,24 +1475,35 @@ deferredGroupedFieldsList):
     - If {deferDirective} is defined:
       - Let {label} be the value or the variable to {deferDirective}'s {label}
        argument.
-      - Let {deferredGroupedFields} be the result of calling
-        {CollectFields(objectType, fragmentSelectionSet, variableValues,
-        visitedFragments, deferredGroupedFieldsList)}.
-      - Append a record containing {label} and {deferredGroupedFields} to
-        {deferredGroupedFieldsList}.
-      - Continue with the next {selection} in {selectionSet}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments, deferredGroupedFieldsList)}.
-    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
-      - Let {responseKey} be the response key shared by all fields in
-        {fragmentGroup}.
-      - Let {groupForResponseKey} be the list in {groupedFields} for
-        {responseKey}; if no such list exists, create it as an empty list.
-      - Append all items in {fragmentGroup} to {groupForResponseKey}.
-- Return {groupedFields}, {deferredGroupedFieldsList} and {visitedFragments}.
-
-Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
+      - Let {ancestors} be an empty list.
+      - Append {parentTarget} to {ancestors}.
+      - If {parentTarget} is defined:
+        - Let {parentAncestors} be the {ancestors} entry on {parentTarget}.
+        - Append all items in {parentAncestors} to {ancestors}.
+      - Let {target} be a new Defer Usage record created from {label} and
+        {ancestors}.
+      - Append {target} to {newDeferUsages}.
+    - Otherwise:
+      - Let {target} be {newTarget}.
+    - Let {fragmentFieldsByTarget}, {fragmentTargetsByKey}, and
+      {fragmentNewDeferUsages} be the result of calling
+      {AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues,
+      visitedFragments, parentTarget, target)}.
+    - For each {target} and {fragmentMap} in {fragmentFieldsByTarget}:
+      - Let {mapForTarget} be the ordered map in {fieldsByTarget} for {target};
+        if no such map exists, create it as an empty ordered map.
+      - For each {responseKey} and {fragmentList} in {fragmentMap}:
+        - Let {listForResponseKey} be the list in {mapForTarget} for
+          {responseKey}; if no such list exists, create it as an empty list.
+        - Append all items in {fragmentList} to {listForResponseKey}.
+    - For each {responseKey} and {targetSet} in {fragmentTargetsByKey}:
+      - Let {setForResponseKey} be the set in {targetsByKey} for {responseKey};
+        if no such set exists, create it as the empty set.
+      - Add all items in {targetSet} to {setForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {fieldsByTarget}, {targetsByKey}, and {newDeferUsages}.
+
+Note: The steps in {AnalyzeSelectionSet()} evaluating the `@skip` and
+`@include` directives may be applied in either order since they apply
+commutatively.

 DoesFragmentTypeApply(objectType, fragmentType):

@@ -757,56 +1518,152 @@ DoesFragmentTypeApply(objectType, fragmentType):
 - if {objectType} is a possible type of {fragmentType}, return {true}
   otherwise return {false}.

-#### Async Payload Record
-
-An Async Payload Record is either a Deferred Fragment Record or a Stream Record.
-All Async Payload Records are structures containing:
+BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets):
+
+- If {parentTargets} is not provided, initialize it to a set containing the
+  value {undefined}.
+- Let {keysWithParentTargets} and {targetSetDetailsMap} be the result of
+  {GetTargetSetDetails(targetsByKey, parentTargets)}.
+- Initialize {remainingFieldsByTarget} to an empty unordered map of ordered
+  maps.
+- For each {target} and {fieldsForTarget} in {fieldsByTarget}:
+  - Initialize {remainingFieldsForTarget} to an empty ordered map.
+  - For each {responseKey} and {fieldList} in {fieldsForTarget}:
+    - Set {responseKey} on {remainingFieldsForTarget} to {fieldList}.
+  - Set the entry for {target} in {remainingFieldsByTarget} to
+    {remainingFieldsForTarget}.
+- Initialize {groupedFieldSet} to an empty ordered map.
+- If {keysWithParentTargets} is not empty:
+  - Let {orderedResponseKeys} be the result of
+    {GetOrderedResponseKeys(parentTargets, remainingFieldsByTarget)}.
+  - For each {responseKey} in {orderedResponseKeys}:
+    - If {keysWithParentTargets} does not contain {responseKey}, continue to
+      the next member of {orderedResponseKeys}.
+    - Let {fieldGroup} be the Field Group record in {groupedFieldSet} for
+      {responseKey}; if no such record exists, create a new such record from
+      the empty list {fields} and the set of {parentTargets}.
+    - Let {targets} be the entry in {targetsByKey} for {responseKey}.
+    - For each {target} in {targets}:
+      - Let {remainingFieldsForTarget} be the entry in
+        {remainingFieldsByTarget} for {target}.
+      - Let {nodes} be the list in {remainingFieldsForTarget} for
+        {responseKey}.
+      - Remove the entry for {responseKey} from {remainingFieldsForTarget}.
+      - For each {node} of {nodes}:
+        - Let {fieldDetails} be a new Field Details record created from {node}
+          and {target}.
+        - Append {fieldDetails} to the {fields} entry on {fieldGroup}.
+- Initialize {newGroupedFieldSetDetails} to an empty unordered map.
+- For each {maskingTargets} and {targetSetDetails} in {targetSetDetailsMap}:
+  - Initialize {newGroupedFieldSet} to an empty ordered map.
+  - Let {keys} be the corresponding entry on {targetSetDetails}.
+  - Let {orderedResponseKeys} be the result of
+    {GetOrderedResponseKeys(maskingTargets, remainingFieldsByTarget)}.
+  - For each {responseKey} in {orderedResponseKeys}:
+    - If {keys} does not contain {responseKey}, continue to the next member of
+      {orderedResponseKeys}.
+    - Let {fieldGroup} be the Field Group record in {newGroupedFieldSet} for
+      {responseKey}; if no such record exists, create a new such record from
+      the empty list {fields} and the set of {maskingTargets}.
+    - Let {targets} be the entry in {targetsByKey} for {responseKey}.
+    - For each {target} in {targets}:
+      - Let {remainingFieldsForTarget} be the entry in
+        {remainingFieldsByTarget} for {target}.
+      - Let {nodes} be the list in {remainingFieldsForTarget} for
+        {responseKey}.
+      - Remove the entry for {responseKey} from {remainingFieldsForTarget}.
+      - For each {node} of {nodes}:
+        - Let {fieldDetails} be a new Field Details record created from {node}
+          and {target}.
+        - Append {fieldDetails} to the {fields} entry on {fieldGroup}.
+  - Let {shouldInitiateDefer} be the corresponding entry on {targetSetDetails}.
+  - Let {details} be a new Grouped Field Set Details record created from
+    {newGroupedFieldSet} and {shouldInitiateDefer}.
+  - Set the entry for {maskingTargets} in {newGroupedFieldSetDetails} to
+    {details}.
+- Return {groupedFieldSet} and {newGroupedFieldSetDetails}.
+
+Note: Entries are always added to Grouped Field Set records in the order in
+which they appear for the first target. Field order for deferred grouped field
+sets never alters the field order for the parent.
+
+GetOrderedResponseKeys(targets, fieldsByTarget):
+
+- Let {firstTarget} be the first entry in {targets}.
+- Assert that {firstTarget} is defined.
+- Let {firstFields} be the entry for {firstTarget} in {fieldsByTarget}.
+- Assert that {firstFields} is defined.
+- Let {responseKeys} be the keys of {firstFields}.
+- Return {responseKeys}.
+
+GetTargetSetDetails(targetsByKey, parentTargets):
+
+- Initialize {keysWithParentTargets} to the empty set.
+- Initialize {targetSetDetailsMap} to an empty unordered map.
+- For each {responseKey} and {targets} in {targetsByKey}:
+  - Initialize {maskingTargets} to an empty set.
+  - For each {target} in {targets}:
+    - If {target} is not defined:
+      - Add {target} to {maskingTargets}.
+      - Continue to the next entry in {targets}.
+    - Let {ancestors} be the corresponding entry on {target}.
+    - For each {ancestor} of {ancestors}:
+      - If {targets} contains {ancestor}, continue to the next member of
+        {targets}.
+    - Add {target} to {maskingTargets}.
+  - If {IsSameSet(maskingTargets, parentTargets)} is {true}:
+    - Add {responseKey} to {keysWithParentTargets}.
+    - Continue to the next entry in {targetsByKey}.
+  - For each {key} in {targetSetDetailsMap}:
+    - If {IsSameSet(maskingTargets, key)} is {true}, let {targetSetDetails} be
+      the map in {targetSetDetailsMap} for {maskingTargets}.
+  - If {targetSetDetails} is defined:
+    - Let {keys} be the corresponding entry on {targetSetDetails}.
+    - Add {responseKey} to {keys}.
+  - Otherwise:
+    - Initialize {keys} to the empty set.
+    - Add {responseKey} to {keys}.
+    - Let {shouldInitiateDefer} be {false}.
+    - For each {target} in {maskingTargets}:
+      - If {parentTargets} does not contain {target}:
+        - Set {shouldInitiateDefer} equal to {true}.
+    - Create {newTargetSetDetails} as a map containing {keys} and
+      {shouldInitiateDefer}.
+    - Set the entry in {targetSetDetailsMap} for {maskingTargets} to
+      {newTargetSetDetails}.
+- Return {keysWithParentTargets} and {targetSetDetailsMap}.
+
+IsSameSet(setA, setB):
+
+- If the size of {setA} is not equal to the size of {setB}:
+  - Return {false}.
+- For each {item} in {setA}:
+  - If {setB} does not contain {item}:
+    - Return {false}.
+- Return {true}.
+
+## Executing Deferred Grouped Field Sets
+
+ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+incrementalPublisher, path, newDeferredGroupedFieldSets, deferMap):
- -#### Execute Deferred Fragment - -ExecuteDeferredFragment(label, objectType, objectValue, groupedFieldSet, path, -variableValues, parentRecord, subsequentPayloads): - -- Let {deferRecord} be an async payload record created from {label} and {path}. -- Initialize {errors} on {deferRecord} to an empty list. -- Let {dataExecution} be the asynchronous future value of: - - Let {payload} be an unordered map. - - Initialize {resultMap} to an empty ordered map. - - For each {groupedFieldSet} as {responseKey} and {fields}: - - Let {fieldName} be the name of the first entry in {fields}. Note: This - value is unaffected if an alias is used. - - Let {fieldType} be the return type defined for the field {fieldName} of - {objectType}. - - If {fieldType} is defined: - - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType, - fields, variableValues, path, subsequentPayloads, asyncRecord)}. - - Set {responseValue} as the value for {responseKey} in {resultMap}. - - Append any encountered field errors to {errors}. - - If {parentRecord} is defined: - - Wait for the result of {dataExecution} on {parentRecord}. - - If {errors} is not empty: - - Add an entry to {payload} named `errors` with the value {errors}. - - If a field error was raised, causing a {null} to be propagated to - {responseValue}: - - Add an entry to {payload} named `data` with the value {null}. +- If {path} is not provided, initialize it to an empty list. +- For each {deferredGroupedFieldSet} of {newDeferredGroupedFieldSets}: + - Let {shouldInitiateDefer} and {groupedFieldSet} be the corresponding entries + on {deferredGroupedFieldSet}. + - If {shouldInitiateDefer} is {true}: + - Initiate implementation specific deferral of further execution, resuming + execution as defined. + - Let {data} be the result of calling {ExecuteGroupedFieldSet(groupedFieldSet, + objectType, objectValue, variableValues, path, deferMap, + incrementalPublisher, deferredGroupedFieldSet)}. 
+  - Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
+  - Let {id} be the corresponding entry on {deferredGroupedFieldSet}.
+  - If a _field error_ was raised, causing a {null} to be propagated to {data}:
+    - Let {incrementalErrors} be the list of such field errors.
+    - Enqueue an Errored Deferred Grouped Field Set event with details {id} and
+      {incrementalErrors}.
  - Otherwise:
-    - Add an entry to {payload} named `data` with the value {resultMap}.
-  - If {label} is defined:
-    - Add an entry to {payload} named `label` with the value {label}.
-  - Add an entry to {payload} named `path` with the value {path}.
-  - Return {payload}.
-- Set {dataExecution} on {deferredFragmentRecord}.
-- Append {deferRecord} to {subsequentPayloads}.
+    - Let {errors} be the list of all _field error_ raised while executing the
+      {groupedFieldSet}.
+    - Enqueue a Completed Deferred Grouped Field Set event with details {id},
+      {data}, and {errors}.

## Executing Fields

@@ -816,19 +1673,19 @@ coerces any provided argument values, then resolves a
value for the field, and finally completes that value either by recursively
executing another selection set or coercing a scalar value.

-ExecuteField(objectType, objectValue, fieldType, fields, variableValues, path,
-subsequentPayloads, asyncRecord):
+ExecuteField(objectType, objectValue, fieldType, fieldGroup, variableValues,
+path, deferMap, incrementalPublisher, incrementalDataRecord):

-- Let {field} be the first entry in {fields}.
-- Let {fieldName} be the field name of {field}.
+- Let {fieldDetails} be the first entry in {fieldGroup}.
+- Let {node} be the corresponding entry on {fieldDetails}.
+- Let {fieldName} be the field name of {node}.
- Append {fieldName} to {path}.
- Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field,
  variableValues)}
- Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
  argumentValues)}.
-- Let {result} be the result of calling {CompleteValue(fieldType, fields,
-  resolvedValue, variableValues, path, subsequentPayloads, asyncRecord)}.
-- Return {result}.
+- Return the result of {CompleteValue(fieldType, fieldGroup, resolvedValue,
+  variableValues, path, deferMap, incrementalPublisher, incrementalDataRecord)}.

### Coercing Field Arguments

@@ -930,54 +1787,77 @@ yielded items satisfies `initialCount` specified on the
`@stream` directive.

#### Execute Stream Field

-ExecuteStreamField(label, iterator, index, fields, innerType, path,
-parentRecord, variableValues, subsequentPayloads):
-
-- Let {streamRecord} be an async payload record created from {label}, {path},
-  and {iterator}.
-- Initialize {errors} on {streamRecord} to an empty list.
-- Let {itemPath} be {path} with {index} appended.
-- Let {dataExecution} be the asynchronous future value of:
-  - Wait for the next item from {iterator}.
-  - If an item is not retrieved because {iterator} has completed:
-    - Set {isCompletedIterator} to {true} on {streamRecord}.
-    - Return {null}.
-  - Let {payload} be an unordered map.
-  - If an item is not retrieved because of an error:
-    - Append the encountered error to {errors}.
-    - Add an entry to {payload} named `items` with the value {null}.
+ExecuteStreamField(stream, index, innerType, variableValues,
+incrementalPublisher, parentIncrementalDataRecord):
+
+- Let {path} and {iterator} be the corresponding entries on {stream}.
+- Let {incrementalErrors} be an empty list of _field error_ for the entire
+  stream, including all _field error_ bubbling up to {path}.
+- Let {currentIndex} be {index}.
+- Let {currentParent} be {parentIncrementalDataRecord}.
+- Let {errored} be {false}.
+- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
+- Repeat the following steps:
+  - Let {itemPath} be {path} with {currentIndex} appended.
+  - Let {streamItems} be a new Stream Items Record.
+  - Let {id} be the corresponding entry on {streamItems}.
+  - Let {parentIds} be an empty list.
+  - If {currentParent} is a Deferred Grouped Field Set Record:
+    - Let {deferredFragments} be the corresponding entry on {currentParent}.
+    - For each {deferredFragment} in {deferredFragments}:
+      - Let {fragmentId} be the entry for {id} on {deferredFragment}.
+      - Append {fragmentId} to {parentIds}.
  - Otherwise:
-    - Let {item} be the item retrieved from {iterator}.
-    - Let {data} be the result of calling {CompleteValue(innerType, fields,
-      item, variableValues, itemPath, subsequentPayloads, parentRecord)}.
-    - Append any encountered field errors to {errors}.
-    - Increment {index}.
-    - Call {ExecuteStreamField(label, iterator, index, fields, innerType, path,
-      streamRecord, variableValues, subsequentPayloads)}.
-    - If a field error was raised, causing a {null} to be propagated to {data},
-      and {innerType} is a Non-Nullable type:
-      - Add an entry to {payload} named `items` with the value {null}.
-    - Otherwise:
-      - Add an entry to {payload} named `items` with a list containing the value
-        {data}.
-  - If {errors} is not empty:
-    - Add an entry to {payload} named `errors` with the value {errors}.
-  - If {label} is defined:
-    - Add an entry to {payload} named `label` with the value {label}.
-  - Add an entry to {payload} named `path` with the value {itemPath}.
-  - If {parentRecord} is defined:
-    - Wait for the result of {dataExecution} on {parentRecord}.
-  - Return {payload}.
-- Set {dataExecution} on {streamRecord}.
-- Append {streamRecord} to {subsequentPayloads}.
-
-CompleteValue(fieldType, fields, result, variableValues, path,
-subsequentPayloads, asyncRecord):
+    - Let {id} be the corresponding entry on {currentParent}.
+    - Append {id} to {parentIds}.
+  - Let {streamId} be the entry for {id} on {stream}.
+  - Enqueue a New Stream Items Event on {eventQueue} with details {id},
+    {streamId}, and {parentIds}.
+  - Wait for the next item from {iterator}.
+  - If {errored} is {true}:
+    - Return.
+  - If an item is not retrieved because of an error:
+    - Let {error} be that error.
+    - Initialize {incrementalErrors} to a list containing {error}.
+    - Enqueue an Errored Stream Items Event on {eventQueue} with details {id}
+      and {incrementalErrors}.
+    - Return.
+  - If an item is not retrieved because {iterator} has completed:
+    - Let {id} be the corresponding entry on {streamItems}.
+    - Enqueue a Completed Empty Stream Items Event on {eventQueue} with details
+      {id}.
+    - Return.
+  - Let {item} be the item retrieved from {iterator}.
+  - Let {streamFieldGroup} be the corresponding entry on {stream}.
+  - Let {newDeferMap} be an empty unordered map.
+  - Let {data} be the result of calling {CompleteValue(innerType,
+    streamFieldGroup, item, variableValues, itemPath, newDeferMap,
+    incrementalPublisher, currentParent)}.
+  - If a field error was raised, causing a {null} to be propagated to {data} and
+    {innerType} is a Non-Nullable type, let {incrementalErrors} be the list of
+    those errors:
+    - Set {errored} to {true}.
+    - Let {id} be the corresponding entry on {streamItems}.
+    - Enqueue an Errored Stream Items Event on {eventQueue} with details {id}
+      and {incrementalErrors}.
+    - Return.
+  - Let {errors} be the list of all _field error_ raised while completing this
+    item.
+  - Initialize {items} to a list containing the single item {data}.
+  - Let {id} be the corresponding entry on {streamItems}.
+  - Enqueue a Completed Stream Items Event on {eventQueue} with details {id},
+    {items}, and {errors}.
+  - Increment {currentIndex}.
+  - Set {currentParent} to {streamItems}.
+
+CompleteValue(fieldType, fieldGroup, result, variableValues, path, deferMap,
+incrementalPublisher, incrementalDataRecord):

- If the {fieldType} is a Non-Null type:
  - Let {innerType} be the inner type of {fieldType}.
  - Let {completedResult} be the result of calling {CompleteValue(innerType,
-    fields, result, variableValues, path)}.
+    fieldGroup, result, variableValues, path)}.
- If {completedResult} is {null}, raise a _field error_.
- Return {completedResult}.
- If {result} is {null} (or another internal value similar to {null} such as

@@ -1003,8 +1883,17 @@ subsequentPayloads, asyncRecord):
  - While {result} is not closed:
    - If {streamDirective} is defined and {index} is greater than or equal to
      {initialCount}:
-      - Call {ExecuteStreamField(label, iterator, index, fields, innerType,
-        path, asyncRecord, subsequentPayloads)}.
+      - Let {streamFieldGroup} be the result of
+        {GetStreamFieldGroup(fieldGroup)}.
+      - Let {stream} be a new Stream Record created from {streamFieldGroup}
+        and {iterator}.
+      - Let {id} be the corresponding entry on {stream}.
+      - Let {earlyReturn} be the implementation-specific value denoting how to
+        notify {iterator} that no additional items will be requested.
+      - Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
+      - Enqueue a New Stream Event on {eventQueue} with details {id}, {label},
+        {path}, and {earlyReturn}.
+      - Call {ExecuteStreamField(stream, index, innerType, variableValues,
+        incrementalPublisher, incrementalDataRecord)}.
      - Return {items}.
    - Otherwise:
      - Wait for the next item from {result} via the {iterator}.

@@ -1012,8 +1901,8 @@ subsequentPayloads, asyncRecord):
      - Let {resultItem} be the item retrieved from {result}.
      - Let {itemPath} be {path} with {index} appended.
      - Let {resolvedItem} be the result of calling {CompleteValue(innerType,
-        fields, resultItem, variableValues, itemPath, subsequentPayloads,
-        asyncRecord)}.
+        fieldGroup, resultItem, variableValues, itemPath, deferMap,
+        incrementalPublisher, incrementalDataRecord)}.
      - Append {resolvedItem} to {items}.
      - Increment {index}.
  - Return {items}.

@@ -1024,10 +1913,67 @@ subsequentPayloads, asyncRecord):
    - Let {objectType} be {fieldType}.
  - Otherwise if {fieldType} is an Interface or Union type.
    - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
-  - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}.
-  - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet,
-    objectType, result, variableValues, path, subsequentPayloads, asyncRecord)}
-    _normally_ (allowing for parallelization).
+  - Let {groupedFieldSet}, {newGroupedFieldSetDetails}, and {newDeferUsages} be
+    the result of {ProcessSubSelectionSets(objectType, fieldGroup,
+    variableValues)}.
+  - Let {newDeferMap} be the result of
+    {AddNewDeferFragments(incrementalPublisher, newDeferUsages,
+    incrementalDataRecord, deferMap, path)}.
+  - Let {newDeferredGroupedFieldSets} be the result of
+    {AddNewDeferredGroupedFieldSets(incrementalPublisher,
+    newGroupedFieldSetDetails, newDeferMap, path)}.
+  - Let {completed} be the result of evaluating
+    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues,
+    path, newDeferMap, incrementalPublisher, incrementalDataRecord)} _normally_
+    (allowing for parallelization).
+  - In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, result,
+    variableValues, incrementalPublisher, path, newDeferredGroupedFieldSets,
+    newDeferMap)}.
+  - Return {completed}.
+
+ProcessSubSelectionSets(objectType, fieldGroup, variableValues):
+
+- Initialize {targetsByKey} to an empty unordered map of sets.
+- Initialize {fieldsByTarget} to an empty unordered map of ordered maps.
+- Initialize {newDeferUsages} to an empty list.
+- Let {fields} and {targets} be the corresponding entries on {fieldGroup}.
+- For each {fieldDetails} within {fields}:
+  - Let {node} and {target} be the corresponding entries on {fieldDetails}.
+  - Let {fieldSelectionSet} be the selection set of {node}.
+  - If {fieldSelectionSet} is null or empty, continue to the next field.
+  - Let {subfieldFieldsByTarget}, {subfieldTargetsByKey}, and
+    {subfieldNewDeferUsages} be the result of calling
+    {AnalyzeSelectionSet(objectType, fieldSelectionSet, variableValues,
+    visitedFragments, target)}.
+  - For each {target} and {subfieldMap} in {subfieldFieldsByTarget}:
+    - Let {mapForTarget} be the ordered map in {fieldsByTarget} for {target};
+      if no such map exists, create it as an empty ordered map.
+    - For each {responseKey} and {subfieldList} in {subfieldMap}:
+      - Let {listForResponseKey} be the list in {mapForTarget} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {subfieldList} to {listForResponseKey}.
+  - For each {responseKey} and {targetSet} in {subfieldTargetsByKey}:
+    - Let {setForResponseKey} be the set in {targetsByKey} for {responseKey};
+      if no such set exists, create it as the empty set.
+    - Add all items in {targetSet} to {setForResponseKey}.
+  - Append all items in {subfieldNewDeferUsages} to {newDeferUsages}.
+- Let {parentTargets} be the corresponding entry on {fieldGroup}.
+- Let {groupedFieldSet} and {newGroupedFieldSetDetails} be the result of calling
+  {BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets)}.
+- Return {groupedFieldSet}, {newGroupedFieldSetDetails}, and {newDeferUsages}.
+
+GetStreamFieldGroup(fieldGroup):
+
+- Let {streamFields} be an empty list.
+- Let {fields} be the corresponding entry on {fieldGroup}.
+- For each {fieldDetails} in {fields}:
+  - Let {node} be the corresponding entry on {fieldDetails}.
+  - Let {newFieldDetails} be a new Field Details record created from {node} and
+    {undefined}.
+  - Append {newFieldDetails} to {streamFields}.
+- Let {targets} be a set containing the value {undefined}.
+- Let {streamFieldGroup} be a new Field Group record created from {streamFields}
+  and {targets}.
+- Return {streamFieldGroup}.

**Coercing Results**

@@ -1090,17 +2036,9 @@ sub-selections.
 }
 ```

-After resolving the value for `me`, the selection sets are merged together so
-`firstName` and `lastName` can be resolved for one value.
-
-MergeSelectionSets(fields):
-
-- Let {selectionSet} be an empty list.
-- For each {field} in {fields}:
-  - Let {fieldSelectionSet} be the selection set of {field}.
-  - If {fieldSelectionSet} is null or empty, continue to the next field.
-  - Append all selections in {fieldSelectionSet} to {selectionSet}.
-- Return {selectionSet}.
+After resolving the value for `me`, the selection sets are merged together by
+calling {ProcessSubSelectionSets()} so `firstName` and `lastName` can be
+resolved for one value.

### Handling Field Errors

@@ -1135,15 +2073,17 @@ resolves to {null}, then the entire list must resolve
to {null}. If the `List` type is also wrapped in a `Non-Null`, the field error
continues to propagate upwards.

-When a field error is raised inside `ExecuteDeferredFragment` or
+When a field error is raised inside `ExecuteDeferredGroupedFieldSets` or
`ExecuteStreamField`, the defer and stream payloads act as error boundaries.
That is, the null resulting from a `Non-Null` type cannot propagate outside of
the boundary of the defer or stream payload.

If a field error is raised while executing the selection set of a fragment with
the `defer` directive, causing a {null} to propagate to the object containing
-this fragment, the {null} should not propagate any further. In this case, the
-associated Defer Payload's `data` field must be set to {null}.
+this fragment, the {null} should not be sent to the client, as this will
+overwrite existing data. In this case, the associated Defer Payload's
+`completed` entry must include the causative errors, whose presence indicates
+that the payload could not be included within the final reconcilable object.
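From the client's perspective, the error-boundary rule can be sketched as follows. This is a non-normative TypeScript sketch; all helper names (`mergeDeferred`, `isReconcilable`) are illustrative and not part of the specification:

```typescript
// Non-normative client-side sketch of the error boundary described above:
// a deferred result that completed with errors contributes no data, so a
// `null` from inside the boundary never overwrites already-delivered values.

type Path = (string | number)[];

interface IncrementalEntry {
  path: Path;
  data: Record<string, unknown>;
}

interface CompletedEntry {
  path: Path;
  label?: string;
  errors?: { message: string }[];
}

// Merge one deferred `data` map into the object addressed by `path`.
function mergeDeferred(
  result: Record<string, unknown>,
  entry: IncrementalEntry
): void {
  let target: any = result;
  for (const segment of entry.path) {
    target = target[segment];
  }
  Object.assign(target, entry.data);
}

// A `completed` entry carrying `errors` marks a payload whose data was
// discarded at the error boundary; it must not be merged.
function isReconcilable(entry: CompletedEntry): boolean {
  return entry.errors === undefined;
}

// Mirroring the `birthday` example below: "monthDefer" fails and is only
// acknowledged via `completed`, while "yearDefer" data merges normally.
const result: Record<string, unknown> = { birthday: {} };
const failed: CompletedEntry = {
  path: ["birthday"],
  label: "monthDefer",
  errors: [{ message: "..." }],
};
if (isReconcilable(failed)) {
  // never reached: errored payloads contribute no data
}
mergeDeferred(result, { path: ["birthday"], data: { year: "2022" } });
```

Note that this merge-by-path behavior is an assumption about a typical client; the specification only constrains the shape of the payloads.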
For example, assume the `month` field is a `Non-Null` type that raises a field
error:

@@ -1166,21 +2106,24 @@ Response 1, the initial response is sent:

```json example
{
  "data": { "birthday": {} },
+  "pending": [
+    { "path": ["birthday"], "label": "monthDefer" },
+    { "path": ["birthday"], "label": "yearDefer" }
+  ],
  "hasNext": true
}
```

-Response 2, the defer payload for label "monthDefer" is sent. The {data} entry
-has been set to {null}, as this {null} as propagated as high as the error
-boundary will allow.
+Response 2, the defer payload for label "monthDefer" is completed with errors.
+Incremental data cannot be sent, as this would overwrite previously sent values.

```json example
{
-  "incremental": [
+  "completed": [
    {
      "path": ["birthday"],
      "label": "monthDefer",
-      "data": null
+      "errors": [...]
    }
  ],
  "hasNext": false
}
```

@@ -1195,21 +2138,27 @@ payload is unaffected by the previous null error.
  "incremental": [
    {
      "path": ["birthday"],
-      "label": "yearDefer",
      "data": { "year": "2022" }
    }
  ],
+  "completed": [
+    {
+      "path": ["birthday"],
+      "label": "yearDefer"
+    }
+  ],
  "hasNext": false
}
```

If the `stream` directive is present on a list field with a Non-Nullable inner
type, and a field error has caused a {null} to propagate to the list item, the
-{null} should not propagate any further, and the associated Stream Payload's
-`item` field must be set to {null}.
-
-For example, assume the `films` field is a `List` type with an `Non-Null` inner
-type. In this case, the second list item raises a field error:
+{null} similarly should not be sent to the client, as this will overwrite
+existing data. In this case, the associated Stream's `completed` entry must
+include the causative errors, whose presence indicates the failure of the stream
+to complete successfully.
+
+For example, assume the `films` field is a `List` type
+with a `Non-Null` inner type.
In this case, the second list item raises a field
+error:

```graphql example
{

@@ -1222,19 +2171,20 @@ Response 1, the initial response is sent:

```json example
{
  "data": { "films": ["A New Hope"] },
+  "pending": [{ "path": ["films"] }],
  "hasNext": true
}
```

-Response 2, the first stream payload is sent. The {items} entry has been set to
-{null}, as this {null} as propagated as high as the error boundary will allow.
+Response 2, the stream is completed with errors. Incremental data cannot be
+sent, as this would overwrite previously sent values.

```json example
{
-  "incremental": [
+  "completed": [
    {
-      "path": ["films", 1],
-      "items": null
+      "path": ["films"],
+      "errors": [...]
    }
  ],
  "hasNext": false
}
```

@@ -1260,19 +2210,20 @@ Response 1, the initial response is sent:
}
```

-Response 2, the first stream payload is sent. The {items} entry has been set to
-a list containing {null}, as this {null} has only propagated as high as the list
-item.
+Response 2, the first stream payload is sent; the stream is not completed. The
+{items} entry has been set to a list containing {null}, as this {null} has only
+propagated as high as the list item.

```json example
{
  "incremental": [
    {
      "path": ["films", 1],
-      "items": [null]
+      "items": [null],
+      "errors": [...]
    }
  ],
-  "hasNext": false
+  "hasNext": true
}
```

diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md
index db50408fa..0bea49561 100644
--- a/spec/Section 7 -- Response.md
+++ b/spec/Section 7 -- Response.md
@@ -264,11 +264,19 @@ discouraged.
}
```

-### Incremental
+### Incremental Delivery

-The `incremental` entry in the response is a non-empty list of Defer or Stream
-payloads. If the response of the GraphQL operation is a response stream, this
-field may appear on both the initial and subsequent values.
+The `pending` entry in the response is a non-empty list of references to pending
+Defer or Stream results.
If the response of the GraphQL operation is a response
+stream, this field should appear on the initial and possibly subsequent
+payloads.
+
+The `incremental` entry in the response is a non-empty list of data fulfilling
+Defer or Stream results. If the response of the GraphQL operation is a response
+stream, this field may appear on the subsequent payloads.
+
+The `completed` entry in the response is a non-empty list of references to
+completed Defer or Stream results. If errors are raised during the execution of
+a Defer or Stream result, the corresponding entry in `completed` will include
+those errors, and no further incremental data will be delivered for that
+result.

For example, a query containing both defer and stream:

@@ -302,6 +310,10 @@ results.
      "films": [{ "title": "A New Hope" }]
    }
  },
+  "pending": [
+    { "path": ["person"], "label": "homeWorldDefer" },
+    { "path": ["person", "films"], "label": "filmStream" }
+  ],
  "hasNext": true
}
```

@@ -312,16 +324,15 @@ Response 2, contains the defer payload and the first
stream payload.

{
  "incremental": [
    {
-      "label": "homeWorldDefer",
      "path": ["person"],
      "data": { "homeWorld": { "name": "Tatooine" } }
    },
    {
-      "label": "filmsStream",
-      "path": ["person", "films", 1],
+      "path": ["person", "films"],
      "items": [{ "title": "The Empire Strikes Back" }]
    }
  ],
+  "completed": [{ "path": ["person"], "label": "homeWorldDefer" }],
  "hasNext": true
}
```

@@ -335,8 +346,7 @@ would be the final response.

{
  "incremental": [
    {
-      "label": "filmsStream",
-      "path": ["person", "films", 2],
+      "path": ["person", "films"],
      "items": [{ "title": "Return of the Jedi" }]
    }
  ],

@@ -350,39 +360,40 @@ iterator of the `films` field closes.

```json example
{
+  "completed": [{ "path": ["person", "films"], "label": "filmStream" }],
  "hasNext": false
}
```

-#### Stream payload

-A stream payload is a map that may appear as an item in the `incremental` entry
-of a response. A stream payload is the result of an associated `@stream`
-directive in the operation. A stream payload must contain `items` and `path`
-entries and may contain `label`, `errors`, and `extensions` entries.
+#### Streamed data
+
+Streamed data may appear as an item in the `incremental` entry of a response.
+Streamed data is the result of an associated `@stream` directive in the
+operation. A stream payload must contain `items` and `path` entries and may
+contain `errors` and `extensions` entries.

##### Items

The `items` entry in a stream payload is a list of results from the execution of
the associated @stream directive. This output will be a list of the same type of
-the field with the associated `@stream` directive. If `items` is set to `null`,
-it indicates that an error has caused a `null` to bubble up to a field higher
-than the list field with the associated `@stream` directive.
+the field with the associated `@stream` directive. If an error has caused a
+`null` to bubble up to a field higher than the list field with the associated
+`@stream` directive, then the stream will complete with errors.

-#### Defer payload
+#### Deferred data

-A defer payload is a map that may appear as an item in the `incremental` entry
-of a response. A defer payload is the result of an associated `@defer` directive
-in the operation. A defer payload must contain `data` and `path` entries and may
-contain `label`, `errors`, and `extensions` entries.
+Deferred data is a map that may appear as an item in the `incremental` entry of
+a response. Deferred data is the result of an associated `@defer` directive in
+the operation. A defer payload must contain `data` and `path` entries and may
+contain `errors` and `extensions` entries.

##### Data

The `data` entry in a Defer payload will be of the type of a particular field in
the GraphQL result. The adjacent `path` field will contain the path segments of
If an error has caused a `null` to +bubble up to a field higher than the field that contains the fragment with the +associated `@defer` directive, then the fragment will complete with errors. #### Path From 831b10ce57e05b732c159df4652089b0367e28ea Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 26 Sep 2023 18:33:35 +0300 Subject: [PATCH 64/65] scattered fixes, streamlining --- spec/Section 6 -- Execution.md | 150 +++++++++++++++------------------ 1 file changed, 68 insertions(+), 82 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 1370bcf0b..0665502d6 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -426,11 +426,6 @@ A Stream Record is a structure that always contains: - {id}: an implementation-specific value uniquely identifying this record, created if not provided. -Within the Execution context, records of this type also include: - -- {streamFieldGroup}: A Field Group record for completing stream items. -- {iterator}: The underlying iterator. - Within the Incremental Publisher context, records of this type also include: - {label}: value derived from the corresponding `@stream` directive. @@ -453,12 +448,6 @@ A Deferred Grouped Field Set Record is a structure that always contains: - {id}: an implementation-specific value uniquely identifying this record, created if not provided. -Within the Execution context, records of this type also include: - -- {groupedFieldSet}: a Grouped Field Set to execute. -- {shouldInitiateDefer}: a boolean value indicating whether implementation - specific deferral of execution should be initiated. - Within the Incremental Publisher context, records of this type also include: - {path}: a list of field names and indices from root to the location of this @@ -479,7 +468,7 @@ a unit of Incremental Data as well as an Incremental Result. 
#### New Deferred Fragment Event -Required event details for New Deferred Fragment Events include: +Required event details include: - {id}: string value identifying this Deferred Fragment. - {label}: value derived from the corresponding `@defer` directive. @@ -490,7 +479,7 @@ Required event details for New Deferred Fragment Events include: #### New Deferred Grouped Field Set Event -Required event details for New Deferred Grouped Field Set Event include: +Required event details include: - {id}: string value identifying this Deferred Grouped Field Set. - {path}: a list of field names and indices from root to the location of this @@ -500,7 +489,7 @@ Required event details for New Deferred Grouped Field Set Event include: #### Completed Deferred Grouped Field Set Event -Required event details for Completed Deferred Grouped Field Set Events include: +Required event details include: - {id}: string value identifying this Deferred Grouped Field Set. - {data}: ordered map represented the completed data for this Deferred Grouped @@ -509,7 +498,7 @@ Required event details for Completed Deferred Grouped Field Set Events include: #### Errored Deferred Grouped Field Set Event -Required event details for Errored Deferred Grouped Field Set Event include: +Required event details include: - {id}: string value identifying this Deferred Grouped Field Set. - {errors}: The _field error_ causing the entire Deferred Grouped Field Set to @@ -517,7 +506,7 @@ Required event details for Errored Deferred Grouped Field Set Event include: #### New Stream Event -Required event details for New Stream Events include: +Required event details include: - {id}: string value identifying this Stream. - {label}: value derived from the corresponding `@stream` directive. 
@@ -528,7 +517,7 @@ Required event details for New Stream Events include: #### New Stream Items Event -Required event details for New Stream Items Event include: +Required event details include: - {id}: string value identifying these Stream Items. - {streamId}: string value identifying the Stream @@ -537,7 +526,7 @@ Required event details for New Stream Items Event include: #### Completed Stream Items Event -Required event details for Completed Stream Items Event include: +Required event details include: - {id}: string value identifying these Stream Items. - {items}: the list of items. @@ -545,26 +534,26 @@ Required event details for Completed Stream Items Event include: #### Completed Empty Stream Items Event -Required event details for Completed Empty Stream Items Events include: +Required event details include: - {id}: string value identifying these Stream Items. #### Errored Stream Items Event -Required event details for Errored Stream Items Events include: +Required event details include: - {id}: string value identifying these Stream Items. - {errors}: the _field error_ causing these items to error. #### Completed Initial Result Event -Required event details for Completed Initial Result Events include: +Required event details include: - {id}: string value identifying this Initial Result. #### Field Error Event -Required event details for Field Error Events include: +Required event details include: - {id}: string value identifying the Initial Result, Deferred Grouped Field Set or Stream Items from which the _field error_ originates. @@ -706,7 +695,7 @@ CreateIncrementalPublisher(): {earlyReturn}. - Set the entry for {id} on {streamMap} to {stream}. -- Define the sub-procedure {HandleNewStreamItemsEvent(id, streamIds, parentIds)} +- Define the sub-procedure {HandleNewStreamItemsEvent(id, streamId, parentIds)} as follows: - Let {stream} be the entry in {streamMap} for {streamId}. 
@@ -1031,22 +1020,21 @@ serial): - Let {fieldsByTarget}, {targetsByKey}, and {newDeferUsages} be the result of calling {AnalyzeSelectionSet(objectType, selectionSet, variableValues)}. -- Let {groupedFieldSet}, {newGroupedFieldSetDetails} be the result of calling +- Let {groupedFieldSet} and {groupDetailsMap} be the result of calling {BuildGroupedFieldSets(fieldsByTarget, targetsByKey)}. - Let {incrementalPublisher} be the result of {CreateIncrementalPublisher()}. -- Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher, - newDeferUsages, incrementalDataRecord)}. -- Let {newDeferredGroupedFieldSets} be the result of - {AddNewDeferredGroupedFieldSets(incrementalPublisher, - newGroupedFieldSetDetails, newDeferMap)}. - Let {initialResultRecord} be a new Initial Result Record. +- Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher, + newDeferUsages, initialResultRecord)}. +- Let {detailsList} be the result of + {AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, + newDeferMap)}. - Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, queryType, initialValue, variableValues, incrementalPublisher, initialResultRecord)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - In parallel, call {ExecuteDeferredGroupedFieldSets(queryType, initialValues, - variableValues, incrementalPublisher, newDeferredGroupedFieldSets, - newDeferMap)}. + variableValues, incrementalPublisher, detailsList, newDeferMap)}. - Let {id} be the corresponding entry on {initialResultRecord}. - Let {errors} be the list of all _field error_ raised while executing the {groupedFieldSet}. @@ -1054,8 +1042,9 @@ serial): - If {errors} is not empty: - Set the corresponding entry on {initialResult} to {errors}. - Set {data} on {initialResult} to {data}. +- Let {eventQueue} and {pending} be the corresponding entries on + {incrementalPublisher}. 
- Enqueue a Completed Initial Result Event on {eventQueue} with {id}. -- Let {pending} be the corresponding entry on {incrementalPublisher}. - Wait for {pending} to be set. - If {pending} is empty, return {initialResult}. - Let {hasNext} be {true}. @@ -1077,8 +1066,7 @@ incrementalDataRecord, deferMap, path): - Let {eventQueue} be the corresponding entry on {incrementalPublisher}. - For each {deferUsage} in {newDeferUsages}: - Let {label} be the corresponding entry on {deferUsage}. - - Let {parent} be (GetParentTarget(deferUsage, deferMap, - incrementalDataRecord)). + - Let {parent} be (GetParent(deferUsage, deferMap, incrementalDataRecord)). - Let {parentId} be the entry for {id} on {parent}. - Let {deferredFragment} be a new Deferred Fragment Record. - Let {id} be the corresponding entry on {deferredFragment}. @@ -1087,37 +1075,39 @@ incrementalDataRecord, deferMap, path): - Set the entry for {deferUsage} in {newDeferMap} to {deferredFragment}. - Return {newDeferMap}. -GetParentTarget(deferUsage, deferMap, incrementalDataRecord): +GetParent(deferUsage, deferMap, incrementalDataRecord): - Let {ancestors} be the corresponding entry on {deferUsage}. - Let {parentDeferUsage} be the first member of {ancestors}. - If {parentDeferUsage} is not defined, return {incrementalDataRecord}. -- Let {parentRecord} be the corresponding entry in {deferMap} for - {parentDeferUsage}. -- Return {parentRecord}. +- Let {parent} be the corresponding entry in {deferMap} for {parentDeferUsage}. +- Return {parent}. -AddNewDeferredGroupedFieldSets(incrementalPublisher, newGroupedFieldSetDetails, -deferMap, path): +AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, deferMap, +path): -- Initialize {newDeferredGroupedFieldSets} to an empty list. -- For each {deferUsageSet} and {groupedFieldSetDetails} in - {newGroupedFieldSetDetails}: +- Initialize {detailsList} to an empty list. 
+- For each {deferUsageSet} and {details} in {groupDetailsMap}:
   - Let {groupedFieldSet} and {shouldInitiateDefer} be the corresponding entries
-    on {groupedFieldSetDetails}.
-  - Let {deferredGroupedFieldSet} be a new Deferred Grouped Field Set Record
-    created from {groupedFieldSet} and {shouldInitiateDefer}.
+    on {details}.
+  - Let {deferredGroupedFieldSetRecord} be a new Deferred Grouped Field Set
+    Record.
+  - Initialize {recordDetails} to an empty unordered map.
+  - Set the corresponding entries on {recordDetails} to
+    {deferredGroupedFieldSetRecord}, {groupedFieldSet}, and
+    {shouldInitiateDefer}.
   - Let {deferredFragments} be the result of {GetDeferredFragments(deferUsageSet,
     newDeferMap)}.
   - Let {fragmentIds} be an empty list.
   - For each {deferredFragment} in {deferredFragments}:
     - Let {id} be the corresponding entry on {deferredFragment}.
     - Append {id} to {fragmentIds}.
-  - Let {id} be the corresponding entry on {deferredGroupedFieldSet}.
+  - Let {id} be the corresponding entry on {deferredGroupedFieldSetRecord}.
   - Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
   - Enqueue a New Deferred Grouped Field Set Event on {eventQueue} with details
     {id}, {path}, and {fragmentIds}.
-  - Append {deferredGroupedFieldSet} to {newDeferredGroupedFieldSets}.
-- Return {newDeferredGroupedFieldSets}.
+  - Append {recordDetails} to {detailsList}.
+- Return {detailsList}.

 GetDeferredFragments(deferUsageSet, deferMap):

@@ -1366,8 +1356,8 @@ A Field Details record is a structure containing:

 - {target}: the Defer Usage record corresponding to the deferred fragment
   enclosing this field or the value {undefined} if the field was not deferred.

-Additional deferred grouped field sets are returned as Grouped Field Set Details
-records which are structures containing:
+Information about additional deferred grouped field sets is returned as a list
+of Grouped Field Set Details structures containing:

 - {groupedFieldSet}: the grouped field set itself.
- {shouldInitiateDefer}: a boolean value indicating whether the executor should @@ -1444,7 +1434,7 @@ parentTarget, newTarget): - Append {target} to {newDeferUsages}. - Otherwise: - Let {target} be {newTarget}. - - Let {fragmentTargetByKeys}, {fragmentFieldsByTarget}, + - Let {fragmentTargetsByKey}, {fragmentFieldsByTarget}, {fragmentNewDeferUsages} be the result of calling {AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues, visitedFragments, parentTarget, target)}. @@ -1485,7 +1475,7 @@ parentTarget, newTarget): - Append {target} to {newDeferUsages}. - Otherwise: - Let {target} be {newTarget}. - - Let {fragmentTargetByKeys}, {fragmentFieldsByTarget}, + - Let {fragmentTargetsByKey}, {fragmentFieldsByTarget}, {fragmentNewDeferUsages} be the result of calling {AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues, visitedFragments, parentTarget, target)}. @@ -1550,7 +1540,7 @@ BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets) - Let {fieldDetails} be a new Field Details record created from {node} and {target}. - Append {fieldDetails} to the {fields} entry on {fieldGroup}. -- Initialize {newGroupedFieldSetDetails} to an empty unordered map. +- Initialize {groupDetailsMap} to an empty unordered map. - For each {maskingTargets} and {targetSetDetails} in {targetSetDetailsMap}: - Initialize {newGroupedFieldSet} to an empty ordered map. - Let {keys} be the corresponding entry on {targetSetDetails}. @@ -1573,11 +1563,11 @@ BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets) and {target}. - Append {fieldDetails} to the {fields} entry on {fieldGroup}. - Let {shouldInitiateDefer} be the corresponding entry on {targetSetDetails}. - - Let {details} be a new Grouped Field Set Details record created from - {newGroupedFieldSet} and {shouldInitiateDefer}. - - Set the entry for {maskingTargets} in {newGroupedFieldSetDetails} to - {details}. -- Return {groupedFieldSet} and {newGroupedFieldSetDetails}. 
+ - Initialize {details} to an empty unordered map. + - Set the entry for {groupedFieldSet} in {details} to {newGroupedFieldSet}. + - Set the corresponding entry in {details} to {shouldInitiateDefer}. + - Set the entry for {maskingTargets} in {groupDetailsMap} to {details}. +- Return {groupedFieldSet} and {groupDetailsMap}. Note: entries are always added to Grouped Field Set records in the order in which they appear for the first target. Field order for deferred grouped field @@ -1641,12 +1631,11 @@ IsSameSet(setA, setB): ## Executing Deferred Grouped Field Sets ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, -incrementalPublisher, path, newDeferredGroupedFieldSets, deferMap) +incrementalPublisher, path, detailsList, deferMap) -- If {path} is not provided, initialize it to an empty list. -- For each {deferredGroupedFieldSet} of {newDeferredGroupedFieldSets}: - - Let {shouldInitiateDefer} and {groupedFieldSet} be the corresponding entries - on {deferredGroupedFieldSet}. +- For each {recordDetails} in {detailsList}, allowing for parallelization: + - Let {deferredGroupedFieldSetRecord}, {groupedFieldSet}, and + {shouldInitiateDefer} be the corresponding entries on {recordDetails}. - If {shouldInitiateDefer} is {true}: - Initiate implementation specific deferral of further execution, resuming execution as defined. @@ -1654,7 +1643,7 @@ incrementalPublisher, path, newDeferredGroupedFieldSets, deferMap) objectType, objectValue, variableValues, path, deferMap, incrementalPublisher, deferredGroupedFieldSet)}. - Let {eventQueue} be the corresponding entry on {incrementalPublisher}. - - Let {id} be the corresponding entry on {deferredGroupedFieldSet}. + - Let {id} be the corresponding entry on {deferredGroupedFieldSetRecord}. - If _field error_ were raised, causing a {null} to be propagated to {data}: - Let {incrementalErrors} be the list of such field errors. 
- Enqueue an Errored Deferred Grouped Field Set event with details {id} and @@ -1787,16 +1776,16 @@ yielded items satisfies `initialCount` specified on the `@stream` directive. #### Execute Stream Field -ExecuteStreamField(stream, index, innerType, variableValues, -incrementalPublisher, parentIncrementalDataRecord): +ExecuteStreamField(stream, path, iterator, fieldGroup, index, innerType, +variableValues, incrementalPublisher, parentIncrementalDataRecord): -- Let {path} and {iterator} be the corresponding entries on {stream}. - Let {incrementalErrors} be an empty list of _field error_ for the entire stream, including all _field error_ bubbling up to {path}. - Let {currentIndex} be {index}. - Let {currentParent} be {parentIncrementalDataRecord}. - Let {errored} be {false}. - Let {eventQueue} be the corresponding entry on {incrementalPublisher}. +- Let {streamFieldGroup} be the result of {GetStreamFieldGroup(fieldGroup)}. - Repeat the following steps: - Let {itemPath} be {path} with {currentIndex} appended. - Let {streamItems} be a new Stream Items Record. @@ -1828,7 +1817,6 @@ incrementalPublisher, parentIncrementalDataRecord): {id}. - Return. - Let {item} be the item retrieved from {iterator}. - - Let {streamFieldGroup} be the corresponding entry on {stream}. - Let {newDeferMap} be an empty unordered map. - Let {data} be the result of calling {CompleteValue(innerType, streamedFieldGroup, item, variableValues, itemPath, newDeferMap, @@ -1880,20 +1868,19 @@ incrementalPublisher, incrementalDataRecord): - Let {iterator} be an iterator for {result}. - Let {items} be an empty list. - Let {index} be zero. + - Let {eventQueue} be the corresponding entry on {incrementalPublisher}. - While {result} is not closed: - If {streamDirective} is defined and {index} is greater than or equal to {initialCount}: - - Let {streamFieldGroup} be the result of - {GetStreamFieldGroup(fieldGroup)}. - - Let {stream} be a new Stream Record created from {streamFieldGroup}, and - {iterator}. 
+ - Let {stream} be a new Stream Record. - Let {id} be the corresponding entry on {stream}. - Let {earlyReturn} be the implementation-specific value denoting how to notify {iterator} that no additional items will be requested. - Enqueue a New Stream Event on {eventQueue} with details {id}, {label}, {path}, and {earlyReturn}. - - Call {ExecuteStreamField(stream, index, innerType, variableValues, - incrementalPublisher, incrementalDataRecord)}. + - Call {ExecuteStreamField(stream, path, iterator, fieldGroup, index, + innerType, variableValues, incrementalPublisher, + incrementalDataRecord)}. - Return {items}. - Otherwise: - Wait for the next item from {result} via the {iterator}. @@ -1913,21 +1900,20 @@ incrementalPublisher, incrementalDataRecord): - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - - Let {groupedFieldSet}, {newGroupedFieldSetDetails}, and {deferUsages} be the - result of {ProcessSubSelectionSets(objectType, fieldGroup, variableValues)}. + - Let {groupedFieldSet}, {groupDetailsMap}, and {deferUsages} be the result of + {ProcessSubSelectionSets(objectType, fieldGroup, variableValues)}. - Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher, newDeferUsages, incrementalDataRecord, deferMap, path)}. - - Let {newDeferredGroupedFieldSets} be the result of - {AddNewDeferredGroupedFieldSets(incrementalPublisher, - newGroupedFieldSetDetails, newDeferMap, path)}. + - Let {detailsList} be the result of + {AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, + newDeferMap, path)}. - Let {completed} be the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues, path, newDeferMap, incrementalPublisher, incrementalDataRecord)} _normally_ (allowing for parallelization). 
- In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, result, - variableValues, incrementalPublisher, newDeferredGroupedFieldSets, - newDeferredFragments, newDeferMap)}. + variableValues, incrementalPublisher, detailsList, newDeferMap)}. - Return {completed}. ProcessSubSelectionSets(objectType, fieldGroup, variableValues): From 813ea2c84694bb356325cd3fc63d6977c38d42c6 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 28 Sep 2023 22:11:50 +0300 Subject: [PATCH 65/65] use identifiers instead of records when possible only the Incremental Publisher subroutines need to maintain records --- spec/Section 6 -- Execution.md | 118 +++++++++++++-------------------- 1 file changed, 45 insertions(+), 73 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 0665502d6..32fbde3ce 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -343,10 +343,8 @@ Results that preclude their data from being sent. The Incremental Publisher provides an asynchronous iterator that resolves to the Subsequent Result stream. -Both the Execution algorithms and the Incremental Publisher service utilize -Incremental Result and Data Records to store required information. The records -are detailed below, including which entries are required for Execution itself -and which are required by the Incremental Publisher. +The Incremental Publisher service utilizes Incremental Result and Data Records +to store required information. The records are described in detail below. ### Incremental Delivery Records @@ -382,16 +380,11 @@ or a Stream Items Record. An Initial Result Record is a structure containing: -- {id}: an implementation-specific value uniquely identifying this record, - created if not provided. +- {id}: an implementation-specific value uniquely identifying this record. 
A Deferred Fragment Record is a structure that always contains: -- {id}: an implementation-specific value uniquely identifying this record, - created if not provided. - -Within the Incremental Publisher context, records of this type also include: - +- {id}: an implementation-specific value uniquely identifying this record. - {label}: value derived from the corresponding `@defer` directive. - {path}: a list of field names and indices from root to the location of the corresponding `@defer` directive. @@ -404,11 +397,7 @@ Within the Incremental Publisher context, records of this type also include: A Stream Items Record is a structure that always contains: -- {id}: an implementation-specific value uniquely identifying this record, - created if not provided. - -Within the Incremental Publisher context, records of this type also include: - +- {id}: an implementation-specific value uniquely identifying this record. - {path}: a list of field names and indices from root to the location of the corresponding list item contained by this Stream Items Record. - {stream}: the Stream Record which this Stream Items Record partially fulfills. @@ -423,11 +412,7 @@ Within the Incremental Publisher context, records of this type also include: A Stream Record is a structure that always contains: -- {id}: an implementation-specific value uniquely identifying this record, - created if not provided. - -Within the Incremental Publisher context, records of this type also include: - +- {id}: an implementation-specific value uniquely identifying this record. - {label}: value derived from the corresponding `@stream` directive. - {path}: a list of field names and indices from root to the location of the corresponding `@stream` directive. @@ -445,11 +430,7 @@ Grouped Field Set Record or a Stream Items Record. A Deferred Grouped Field Set Record is a structure that always contains: -- {id}: an implementation-specific value uniquely identifying this record, - created if not provided. 
- -Within the Incremental Publisher context, records of this type also include: - +- {id}: an implementation-specific value uniquely identifying this record. - {path}: a list of field names and indices from root to the location of this deferred grouped field set. - {deferredFragments}: a set of Deferred Fragment Records containing this @@ -918,7 +899,7 @@ CreateIncrementalPublisher(): - Remove the event from the queue. - Call {HandleExecutionEvent(eventType, eventDetails)}. - Wait for the next event or for {allResultsCompleted} to be set to {true}. - - If {allResultsCompleted} is {true}, return. + - If {allResultsCompleted} is {true}, return. - In parallel, set {subsequentResults} on {incrementalPublisher} to the result of lazily executing {YieldSubsequentResults()}. @@ -1023,19 +1004,19 @@ serial): - Let {groupedFieldSet} and {groupDetailsMap} be the result of calling {BuildGroupedFieldSets(fieldsByTarget, targetsByKey)}. - Let {incrementalPublisher} be the result of {CreateIncrementalPublisher()}. -- Let {initialResultRecord} be a new Initial Result Record. +- Initialize {initialResultId} to an identifier unique to this execution. - Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher, - newDeferUsages, initialResultRecord)}. + newDeferUsages, initialResultId)}. - Let {detailsList} be the result of {AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, newDeferMap)}. - Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, queryType, initialValue, variableValues, incrementalPublisher, - initialResultRecord)} _serially_ if {serial} is {true}, _normally_ (allowing + initialResultId)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - In parallel, call {ExecuteDeferredGroupedFieldSets(queryType, initialValues, variableValues, incrementalPublisher, detailsList, newDeferMap)}. -- Let {id} be the corresponding entry on {initialResultRecord}. 
+- Let {id} be {initialResultId}.
 - Let {errors} be the list of all _field error_ raised while executing the
   {groupedFieldSet}.
 - Initialize {initialResult} to an empty unordered map.
@@ -1052,8 +1033,8 @@ serial):
 - Let {subsequentResults} be the corresponding entry on {incrementalPublisher}.
 - Return {initialResult} and {subsequentResults}.

-AddNewDeferFragments(incrementalPublisher, newDeferUsages,
-incrementalDataRecord, deferMap, path):
+AddNewDeferFragments(incrementalPublisher, newDeferUsages, incrementalDataId,
+deferMap, path):

 - Initialize {newDeferredGroupedFieldSets} to an empty list.
 - If {newDeferUsages} is empty:
@@ -1065,21 +1046,20 @@ incrementalDataRecord, deferMap, path):
   - Set the entry for {deferUsage} in {newDeferMap} to {deferredFragment}.
 - Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
 - For each {deferUsage} in {newDeferUsages}:
+  - Let {id} be an identifier unique to this execution.
   - Let {label} be the corresponding entry on {deferUsage}.
-  - Let {parent} be (GetParent(deferUsage, deferMap, incrementalDataRecord)).
-  - Let {parentId} be the entry for {id} on {parent}.
-  - Let {deferredFragment} be a new Deferred Fragment Record.
-  - Let {id} be the corresponding entry on {deferredFragment}.
-  - Enqueue a New Deferred Fragment Event on {eventQueue} with details {label},
-    {path}, and {parentId}.
-  - Set the entry for {deferUsage} in {newDeferMap} to {deferredFragment}.
+  - Let {parent} be (GetParent(deferUsage, deferMap, incrementalDataId)).
+  - Let {parentId} be {parent}.
+  - Enqueue a New Deferred Fragment Event on {eventQueue} with details {id},
+    {label}, {path}, and {parentId}.
+  - Set the entry for {deferUsage} in {newDeferMap} to {id}.
 - Return {newDeferMap}.

-GetParent(deferUsage, deferMap, incrementalDataRecord):
+GetParent(deferUsage, deferMap, incrementalDataId):

 - Let {ancestors} be the corresponding entry on {deferUsage}.
 - Let {parentDeferUsage} be the first member of {ancestors}.
-- If {parentDeferUsage} is not defined, return {incrementalDataRecord}. +- If {parentDeferUsage} is not defined, return {incrementalDataId}. - Let {parent} be the corresponding entry in {deferMap} for {parentDeferUsage}. - Return {parent}. @@ -1087,14 +1067,12 @@ AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, deferMap, path): - Initialize {detailsList} to an empty list. -- For each {deferUsageSet} and {details} in {groupDetailsMap}: +- For each {deferUsageSet} and {groupDetails} in {groupDetailsMap}: - Let {groupedFieldSet} and {shouldInitiateDefer} be the corresponding entries - on {details}. - - Let {deferredGroupedFieldSetRecord} be a new Deferred Grouped Field Set - Record. - - Initialize {recordDetails} to an empty unordered map. - - Set the corresponding entries on {recordDetails} to - {deferredGroupedFieldSetRecord}, {groupedFieldSet}, and + on {groupDetails}. + - Let {id} be an identifier unique to this execution. + - Initialize {details} to an empty unordered map. + - Set the corresponding entries on {details} to {id}, {groupedFieldSet}, and {shouldInitiateDefer}. - Let {deferredFragments} be the result of {GetDeferredFragments(deferUsageSet, newDeferMap)}. @@ -1106,7 +1084,7 @@ path): - Let {eventQueue} be the corresponding entry on {incrementalPublisher}. - Enqueue a New Deferred Grouped Field Set Event on {eventQueue} with details {id}, {path}, and {fragmentIds}. - - Append {recordDetails} to {detailsList}. + - Append {details} to {detailsList}. - Return {detailsList}. GetDeferredFragments(deferUsageSet, deferMap): @@ -1124,7 +1102,7 @@ type need to be known, as well as whether it must be executed serially, or may be executed in parallel. ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, -path, deferMap, incrementalPublisher, incrementalDataRecord): +path, deferMap, incrementalPublisher, incrementalDataId): - If {path} is not provided, initialize it to an empty list. 
- Initialize {resultMap} to an empty ordered map. @@ -1138,7 +1116,7 @@ path, deferMap, incrementalPublisher, incrementalDataRecord): - If {fieldType} is defined: - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType, fieldGroup, variableValues, path, incrementalPublisher, - incrementalDataRecord)}. + incrementalDataId)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. - Return {resultMap}. @@ -1564,9 +1542,10 @@ BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets) - Append {fieldDetails} to the {fields} entry on {fieldGroup}. - Let {shouldInitiateDefer} be the corresponding entry on {targetSetDetails}. - Initialize {details} to an empty unordered map. - - Set the entry for {groupedFieldSet} in {details} to {newGroupedFieldSet}. - - Set the corresponding entry in {details} to {shouldInitiateDefer}. - - Set the entry for {maskingTargets} in {groupDetailsMap} to {details}. + - Set the entry for {groupedFieldSet} in {groupDetails} to + {newGroupedFieldSet}. + - Set the corresponding entry in {groupDetails} to {shouldInitiateDefer}. + - Set the entry for {maskingTargets} in {groupDetailsMap} to {groupDetails}. - Return {groupedFieldSet} and {groupDetailsMap}. Note: entries are always added to Grouped Field Set records in the order in @@ -1633,9 +1612,9 @@ IsSameSet(setA, setB): ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, incrementalPublisher, path, detailsList, deferMap) -- For each {recordDetails} in {detailsList}, allowing for parallelization: - - Let {deferredGroupedFieldSetRecord}, {groupedFieldSet}, and - {shouldInitiateDefer} be the corresponding entries on {recordDetails}. +- For each {details} in {detailsList}, allowing for parallelization: + - Let {id}, {groupedFieldSet}, and {shouldInitiateDefer} be the corresponding + entries on {details}. - If {shouldInitiateDefer} is {true}: - Initiate implementation specific deferral of further execution, resuming execution as defined. 
@@ -1643,7 +1622,6 @@ incrementalPublisher, path, detailsList, deferMap) objectType, objectValue, variableValues, path, deferMap, incrementalPublisher, deferredGroupedFieldSet)}. - Let {eventQueue} be the corresponding entry on {incrementalPublisher}. - - Let {id} be the corresponding entry on {deferredGroupedFieldSetRecord}. - If _field error_ were raised, causing a {null} to be propagated to {data}: - Let {incrementalErrors} be the list of such field errors. - Enqueue an Errored Deferred Grouped Field Set event with details {id} and @@ -1663,7 +1641,7 @@ finally completes that value either by recursively executing another selection set or coercing a scalar value. ExecuteField(objectType, objectValue, fieldType, fieldGroup, variableValues, -path, deferMap, incrementalPublisher, incrementalDataRecord): +path, deferMap, incrementalPublisher, incrementalDataId): - Let {fieldDetails} be the first entry in {fieldGroup}. - Let {node} be the corresponding entry on {fieldDetails}. @@ -1674,7 +1652,7 @@ path, deferMap, incrementalPublisher, incrementalDataRecord): - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName, argumentValues)}. - Return the result of {CompleteValue(fieldType, fields, resolvedValue, - variableValues, path, deferMap, incrementalPublisher, incrementalDataRecord)}. + variableValues, path, deferMap, incrementalPublisher, incrementalDataId)}. ### Coercing Field Arguments @@ -1788,8 +1766,7 @@ variableValues, incrementalPublisher, parentIncrementalDataRecord): - Let {streamFieldGroup} be the result of {GetStreamFieldGroup(fieldGroup)}. - Repeat the following steps: - Let {itemPath} be {path} with {currentIndex} appended. - - Let {streamItems} be a new Stream Items Record. - - Let {id} be the corresponding entry on {streamItems}. + - Let {id} be an identifier unique to this execution. - Let {parentIds} be an empty list. - If {currentParent} is a Deferred Grouped Field Set Record. 
- Let {deferredFragments} be the corresponding entry on {currentParent}. @@ -1812,7 +1789,6 @@ variableValues, incrementalPublisher, parentIncrementalDataRecord): and {incrementalErrors}. - Return. - If an item is not retrieved because {iterator} has completed: - - Let {id} be the corresponding entry on {streamItems} - Enqueue a Completed Empty Stream Items Event on {eventQueue} with details {id}. - Return. @@ -1825,14 +1801,12 @@ variableValues, incrementalPublisher, parentIncrementalDataRecord): {innerType} is a Non-Nullable type, let {incrementalErrors} be the list of those errors: - Set {errored} to {true}. - - Let {id} be the corresponding entry on {streamItems} - Enqueue an Errored Stream Items Event on {eventQueue} with details {id} and {incrementalErrors}. - Return. - Let {errors} be the list of all _field error_ raised while completing this item. - Initialize {items} to an list containing the single item {data}. - - Let {id} be the corresponding entry on {streamItems} - Enqueue a Completed Stream Items Event on {eventQueue} with details {id}, {items}, and {errors}. - Increment {currentIndex}. @@ -1840,7 +1814,7 @@ variableValues, incrementalPublisher, parentIncrementalDataRecord): - Increment {index}. CompleteValue(fieldType, fieldGroup, result, variableValues, path, deferMap, -incrementalPublisher, incrementalDataRecord): +incrementalPublisher, incrementalDataId): - If the {fieldType} is a Non-Null type: - Let {innerType} be the inner type of {fieldType}. @@ -1872,15 +1846,13 @@ incrementalPublisher, incrementalDataRecord): - While {result} is not closed: - If {streamDirective} is defined and {index} is greater than or equal to {initialCount}: - - Let {stream} be a new Stream Record. - - Let {id} be the corresponding entry on {stream}. + - Let {id} be an identifier unique to this execution. - Let {earlyReturn} be the implementation-specific value denoting how to notify {iterator} that no additional items will be requested. 
- Enqueue a New Stream Event on {eventQueue} with details {id}, {label}, {path}, and {earlyReturn}. - Call {ExecuteStreamField(stream, path, iterator, fieldGroup, index, - innerType, variableValues, incrementalPublisher, - incrementalDataRecord)}. + innerType, variableValues, incrementalPublisher, incrementalDataId)}. - Return {items}. - Otherwise: - Wait for the next item from {result} via the {iterator}. @@ -1889,7 +1861,7 @@ incrementalPublisher, incrementalDataRecord): - Let {itemPath} be {path} with {index} appended. - Let {resolvedItem} be the result of calling {CompleteValue(innerType, fields, resultItem, variableValues, itemPath, deferMap, - incrementalPublisher, incrementalDataRecord)}. + incrementalPublisher, incrementalDataId)}. - Append {resolvedItem} to {items}. - Increment {index}. - Return {items}. @@ -1904,13 +1876,13 @@ incrementalPublisher, incrementalDataRecord): {ProcessSubSelectionSets(objectType, fieldGroup, variableValues)}. - Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher, newDeferUsages, - incrementalDataRecord, deferMap, path)}. + incrementalDataId, deferMap, path)}. - Let {detailsList} be the result of {AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, newDeferMap, path)}. - Let {completed} be the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues, - path, newDeferMap, incrementalPublisher, incrementalDataRecord)} _normally_ + path, newDeferMap, incrementalPublisher, incrementalDataId)} _normally_ (allowing for parallelization). - In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, result, variableValues, incrementalPublisher, detailsList, newDeferMap)}.