feat: update L1 CloudFormation resource definitions (#30860)
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec`

**L1 CloudFormation resource definition changes:**
```
├[~] service aws-bedrock
│ └ resources
│    ├[~] resource AWS::Bedrock::Agent
│    │ └ types
│    │    ├[~] type GuardrailConfiguration
│    │    │ └  - documentation: Configuration information for a guardrail that you use with the `Converse` action.
│    │    │    + documentation: Configuration information for a guardrail that you use with the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) operation.
│    │    └[~] type S3Identifier
│    │      ├  - documentation: Contains information about the S3 object containing the resource.
│    │      │  + documentation: The identifier information for an Amazon S3 bucket.
│    │      └ properties
│    │         └ S3ObjectKey: (documentation changed)
│    ├[~] resource AWS::Bedrock::DataSource
│    │ ├ properties
│    │ │  ├ DataDeletionPolicy: (documentation changed)
│    │ │  └ DataSourceConfiguration: (documentation changed)
│    │ └ types
│    │    ├[~] type ChunkingConfiguration
│    │    │ └ properties
│    │    │    └ ChunkingStrategy: (documentation changed)
│    │    ├[~] type DataSourceConfiguration
│    │    │ ├  - documentation: Contains details about how a data source is stored.
│    │    │ │  + documentation: The connection configuration for the data source.
│    │    │ └ properties
│    │    │    ├ S3Configuration: (documentation changed)
│    │    │    └ Type: (documentation changed)
│    │    └[~] type S3DataSourceConfiguration
│    │      ├  - documentation: Contains information about the S3 configuration of the data source.
│    │      │  + documentation: The configuration information to connect to Amazon S3 as your data source.
│    │      └ properties
│    │         ├ BucketArn: (documentation changed)
│    │         ├ BucketOwnerAccountId: (documentation changed)
│    │         └ InclusionPrefixes: (documentation changed)
│    └[~] resource AWS::Bedrock::KnowledgeBase
│      └ types
│         └[~] type KnowledgeBaseConfiguration
│           └  - documentation: Contains details about the embeddings configuration of the knowledge base.
│              + documentation: Configurations to apply to a knowledge base attached to the agent during query. For more information, see [Knowledge base retrieval configurations](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-session-state.html#session-state-kb) .
├[~] service aws-cloudtrail
│ └ resources
│    ├[~] resource AWS::CloudTrail::EventDataStore
│    │ └ types
│    │    └[~] type AdvancedFieldSelector
│    │      └ properties
│    │         └ Field: (documentation changed)
│    └[~] resource AWS::CloudTrail::Trail
│      └ types
│         ├[~] type AdvancedFieldSelector
│         │ └ properties
│         │    └ Field: (documentation changed)
│         ├[~] type DataResource
│         │ └  - documentation: Data events provide information about the resource operations performed on or within a resource itself. These are also known as data plane operations. You can specify up to 250 data resources for a trail.
│         │    Configure the `DataResource` to specify the resource type and resource ARNs for which you want to log data events.
│         │    You can specify the following resource types in your event selectors for your trail:
│         │    - `AWS::DynamoDB::Table`
│         │    - `AWS::Lambda::Function`
│         │    - `AWS::S3::Object`
│         │    > The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail.
│         │    > 
│         │    > If you are using advanced event selectors, the maximum total number of values for all conditions, across all advanced event selectors for the trail, is 500. 
│         │    The following example demonstrates how logging works when you configure logging of all data events for an S3 bucket named `DOC-EXAMPLE-BUCKET1` . In this example, the CloudTrail user specified an empty prefix, and the option to log both `Read` and `Write` data events.
│         │    - A user uploads an image file to `DOC-EXAMPLE-BUCKET1` .
│         │    - The `PutObject` API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
│         │    - A user uploads an object to an Amazon S3 bucket named `arn:aws:s3:::DOC-EXAMPLE-BUCKET1` .
│         │    - The `PutObject` API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
│         │    The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named *MyLambdaFunction* , but not for all Lambda functions.
│         │    - A user runs a script that includes a call to the *MyLambdaFunction* function and the *MyOtherLambdaFunction* function.
│         │    - The `Invoke` API operation on *MyLambdaFunction* is an Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for *MyLambdaFunction* , any invocations of that function are logged. The trail processes and logs the event.
│         │    - The `Invoke` API operation on *MyOtherLambdaFunction* is an Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the `Invoke` operation for *MyOtherLambdaFunction* does not match the function specified for the trail. The trail doesn’t log the event.
│         │    + documentation: You can configure the `DataResource` in an `EventSelector` to log data events for the following three resource types:
│         │    - `AWS::DynamoDB::Table`
│         │    - `AWS::Lambda::Function`
│         │    - `AWS::S3::Object`
│         │    To log data events for all other resource types including objects stored in [directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) , you must use [AdvancedEventSelectors](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedEventSelector.html) . You must also use `AdvancedEventSelectors` if you want to filter on the `eventName` field.
│         │    Configure the `DataResource` to specify the resource type and resource ARNs for which you want to log data events.
│         │    > The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail. 
│         │    The following example demonstrates how logging works when you configure logging of all data events for a general purpose bucket named `DOC-EXAMPLE-BUCKET1` . In this example, the CloudTrail user specified an empty prefix, and the option to log both `Read` and `Write` data events.
│         │    - A user uploads an image file to `DOC-EXAMPLE-BUCKET1` .
│         │    - The `PutObject` API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
│         │    - A user uploads an object to an Amazon S3 bucket named `arn:aws:s3:::DOC-EXAMPLE-BUCKET1` .
│         │    - The `PutObject` API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
│         │    The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named *MyLambdaFunction* , but not for all Lambda functions.
│         │    - A user runs a script that includes a call to the *MyLambdaFunction* function and the *MyOtherLambdaFunction* function.
│         │    - The `Invoke` API operation on *MyLambdaFunction* is an Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for *MyLambdaFunction* , any invocations of that function are logged. The trail processes and logs the event.
│         │    - The `Invoke` API operation on *MyOtherLambdaFunction* is an Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the `Invoke` operation for *MyOtherLambdaFunction* does not match the function specified for the trail. The trail doesn’t log the event.
│         └[~] type EventSelector
│           └ properties
│              └ DataResources: (documentation changed)
├[~] service aws-cognito
│ └ resources
│    └[~] resource AWS::Cognito::UserPoolUICustomizationAttachment
│      └ attributes
│         └ Id: (documentation changed)
├[~] service aws-ecs
│ └ resources
│    ├[~] resource AWS::ECS::Service
│    │ └ types
│    │    └[~] type LogConfiguration
│    │      └ properties
│    │         └ LogDriver: (documentation changed)
│    └[~] resource AWS::ECS::TaskDefinition
│      ├ properties
│      │  └ Cpu: (documentation changed)
│      └ types
│         ├[~] type ContainerDefinition
│         │ └ properties
│         │    └ StartTimeout: (documentation changed)
│         └[~] type LogConfiguration
│           └ properties
│              └ LogDriver: (documentation changed)
├[~] service aws-fsx
│ └ resources
│    ├[~] resource AWS::FSx::FileSystem
│    │ └ types
│    │    ├[~] type OntapConfiguration
│    │    │ └ properties
│    │    │    ├ DeploymentType: (documentation changed)
│    │    │    ├ HAPairs: (documentation changed)
│    │    │    ├ PreferredSubnetId: (documentation changed)
│    │    │    └ ThroughputCapacityPerHAPair: (documentation changed)
│    │    └[~] type OpenZFSConfiguration
│    │      └ properties
│    │         └ DeploymentType: (documentation changed)
│    └[~] resource AWS::FSx::Volume
│      └ types
│         └[~] type AggregateConfiguration
│           └ properties
│              └ Aggregates: (documentation changed)
├[~] service aws-qbusiness
│ └ resources
│    ├[~] resource AWS::QBusiness::DataSource
│    │ └ properties
│    │    └ Configuration: (documentation changed)
│    └[~] resource AWS::QBusiness::WebExperience
│      └ properties
│         └ RoleArn: (documentation changed)
├[~] service aws-rds
│ └ resources
│    └[~] resource AWS::RDS::DBInstance
│      ├ properties
│      │  └ AutomaticBackupReplicationRegion: (documentation changed)
│      └ types
│         └[~] type ProcessorFeature
│           └  - documentation: The `ProcessorFeature` property type specifies the processor features of a DB instance class status.
│              + documentation: The `ProcessorFeature` property type specifies the processor features of a DB instance class.
└[~] service aws-sagemaker
  └ resources
     ├[~] resource AWS::SageMaker::DataQualityJobDefinition
     │ └ types
     │    └[~] type StoppingCondition
     │      └  - documentation: Specifies a limit to how long a model training job or model compilation job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the training or compilation job. Use this API to cap model training costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     │         + documentation: Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     ├[~] resource AWS::SageMaker::ModelBiasJobDefinition
     │ └ types
     │    └[~] type StoppingCondition
     │      └  - documentation: Specifies a limit to how long a model training job or model compilation job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the training or compilation job. Use this API to cap model training costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     │         + documentation: Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     ├[~] resource AWS::SageMaker::ModelExplainabilityJobDefinition
     │ └ types
     │    └[~] type StoppingCondition
     │      └  - documentation: Specifies a limit to how long a model training job or model compilation job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the training or compilation job. Use this API to cap model training costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     │         + documentation: Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     ├[~] resource AWS::SageMaker::ModelQualityJobDefinition
     │ └ types
     │    └[~] type StoppingCondition
     │      └  - documentation: Specifies a limit to how long a model training job or model compilation job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the training or compilation job. Use this API to cap model training costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     │         + documentation: Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
     │         To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
     │         The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
     │         > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
     └[~] resource AWS::SageMaker::MonitoringSchedule
       └ types
          └[~] type StoppingCondition
            └  - documentation: Specifies a limit to how long a model training job or model compilation job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the training or compilation job. Use this API to cap model training costs.
               To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
               The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
               > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
               + documentation: Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
               To stop a training job, SageMaker sends the algorithm the `SIGTERM` signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
               The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort case as model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with `CreateModel` .
               > The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
```
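The largest documentation change above rewrites the `AWS::CloudTrail::Trail` `DataResource` text, which describes logging S3 object-level and Lambda data events through event selectors. As a sketch of the configuration that documentation refers to (not part of this commit — the trail name, bucket names, and function ARN are placeholders), a template fragment might look like:

```yaml
# Hypothetical illustration only; names and ARNs are placeholders.
MyTrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    TrailName: example-trail
    S3BucketName: example-logging-bucket   # bucket that receives the trail's log files
    IsLogging: true
    EventSelectors:
      - ReadWriteType: All                 # log both Read and Write data events
        DataResources:                     # up to 250 data resources per trail
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::DOC-EXAMPLE-BUCKET1/   # empty prefix: all objects in the bucket
          - Type: AWS::Lambda::Function
            Values:
              - arn:aws:lambda:us-east-1:123456789012:function:MyLambdaFunction
```

Per the updated documentation, filtering on `eventName` or logging data events for resource types beyond the three listed requires `AdvancedEventSelectors` instead of the plain `EventSelectors` shown here.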
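Several of the Bedrock changes reword the `AWS::Bedrock::DataSource` connection-configuration docs. A minimal fragment exercising the properties named in the diff (`DataDeletionPolicy`, `DataSourceConfiguration.Type`, `S3Configuration.BucketArn`, `InclusionPrefixes`) might look like the following — the logical IDs, bucket ARN, and knowledge base reference are hypothetical:

```yaml
# Hypothetical illustration only; assumes MyKnowledgeBase is defined elsewhere in the template.
MyBedrockDataSource:
  Type: AWS::Bedrock::DataSource
  Properties:
    Name: example-data-source
    KnowledgeBaseId: !Ref MyKnowledgeBase
    DataDeletionPolicy: RETAIN        # keep vector store data if the data source is deleted
    DataSourceConfiguration:
      Type: S3
      S3Configuration:                # "configuration information to connect to Amazon S3"
        BucketArn: arn:aws:s3:::example-bucket
        InclusionPrefixes:
          - documents/                # only ingest objects under this prefix
```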
aws-cdk-automation committed Jul 15, 2024
1 parent 3524718 commit ce7a8d5
Showing 5 changed files with 21 additions and 21 deletions.
4 changes: 2 additions & 2 deletions packages/@aws-cdk/cloudformation-diff/package.json

```diff
@@ -23,8 +23,8 @@
   },
   "license": "Apache-2.0",
   "dependencies": {
-    "@aws-cdk/aws-service-spec": "^0.1.10",
-    "@aws-cdk/service-spec-types": "^0.0.78",
+    "@aws-cdk/aws-service-spec": "^0.1.11",
+    "@aws-cdk/service-spec-types": "^0.0.79",
     "chalk": "^4",
     "diff": "^5.2.0",
     "fast-deep-equal": "^3.1.3",
```
2 changes: 1 addition & 1 deletion packages/@aws-cdk/integ-runner/package.json

```diff
@@ -74,7 +74,7 @@
     "@aws-cdk/cloud-assembly-schema": "0.0.0",
     "@aws-cdk/cloudformation-diff": "0.0.0",
     "@aws-cdk/cx-api": "0.0.0",
-    "@aws-cdk/aws-service-spec": "^0.1.10",
+    "@aws-cdk/aws-service-spec": "^0.1.11",
     "cdk-assets": "0.0.0",
     "@aws-cdk/cdk-cli-wrapper": "0.0.0",
     "aws-cdk": "0.0.0",
```
2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/package.json

```diff
@@ -135,7 +135,7 @@
     "mime-types": "^2.1.35"
   },
   "devDependencies": {
-    "@aws-cdk/aws-service-spec": "^0.1.10",
+    "@aws-cdk/aws-service-spec": "^0.1.11",
     "@aws-cdk/cdk-build-tools": "0.0.0",
     "@aws-cdk/custom-resource-handlers": "0.0.0",
     "@aws-cdk/pkglint": "0.0.0",
```
6 changes: 3 additions & 3 deletions tools/@aws-cdk/spec2cdk/package.json

```diff
@@ -32,9 +32,9 @@
   },
   "license": "Apache-2.0",
   "dependencies": {
-    "@aws-cdk/aws-service-spec": "^0.1.10",
-    "@aws-cdk/service-spec-importers": "^0.0.39",
-    "@aws-cdk/service-spec-types": "^0.0.78",
+    "@aws-cdk/aws-service-spec": "^0.1.11",
+    "@aws-cdk/service-spec-importers": "^0.0.40",
+    "@aws-cdk/service-spec-types": "^0.0.79",
     "@cdklabs/tskb": "^0.0.3",
     "@cdklabs/typewriter": "^0.0.3",
     "camelcase": "^6",
```
28 changes: 14 additions & 14 deletions yarn.lock

```diff
@@ -51,12 +51,12 @@
   resolved "https://registry.npmjs.org/@aws-cdk/asset-node-proxy-agent-v6/-/asset-node-proxy-agent-v6-2.0.3.tgz#9b5d213b5ce5ad4461f6a4720195ff8de72e6523"
   integrity sha512-twhuEG+JPOYCYPx/xy5uH2+VUsIEhPTzDY0F1KuB+ocjWWB/KEDiOVL19nHvbPCB6fhWnkykXEMJ4HHcKvjtvg==
 
-"@aws-cdk/aws-service-spec@^0.1.10":
-  version "0.1.10"
-  resolved "https://registry.npmjs.org/@aws-cdk/aws-service-spec/-/aws-service-spec-0.1.10.tgz#8ba2ae746067e58e0a473167f85c0c6db2edeb8f"
-  integrity sha512-k7uxQNVDS8iKXDxojrNrcxV61QcDSzn/KmDU3/5aFhFCq5i4WOvS9M+7xXz8YmBP9ZxTEUqU2loq+LQyqoZPhA==
+"@aws-cdk/aws-service-spec@^0.1.11":
+  version "0.1.11"
+  resolved "https://registry.npmjs.org/@aws-cdk/aws-service-spec/-/aws-service-spec-0.1.11.tgz#a0e2acddf9fb260a992ea813767525d9509ec657"
+  integrity sha512-OGsu1Z+xWZcUBmbBazcplYzXopweuZGd3HL8rBgn5LbSyGAeiRVsw8/EhwBg4/emUu+sw6L7PmDJ2igX8HWYMw==
   dependencies:
-    "@aws-cdk/service-spec-types" "^0.0.78"
+    "@aws-cdk/service-spec-types" "^0.0.79"
     "@cdklabs/tskb" "^0.0.3"
 
 "@aws-cdk/lambda-layer-kubectl-v24@^2.0.242":
@@ -74,12 +74,12 @@
   resolved "https://registry.npmjs.org/@aws-cdk/lambda-layer-kubectl-v30/-/lambda-layer-kubectl-v30-2.0.0.tgz#97c40d31e5350ce7170be5d188361118b1e39231"
   integrity sha512-yES6NfrJ3QV1372lAZ2FLXp/no4bqDWBXeSREJdrpWjQzD0wvL/hCpHEyjZrzHhOi27YbMxFTQ3g9isKAul8+A==
 
-"@aws-cdk/service-spec-importers@^0.0.39":
-  version "0.0.39"
-  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-importers/-/service-spec-importers-0.0.39.tgz#189a6f88368cbe63310017492b48566ea74fe757"
-  integrity sha512-vc1h/ZHUIQWQihq0Nao2M/P/hVYwpJp1nbFfNW7OZyfA5tJ4s1G+NNgOgy8BfoofkmDbVzJfRfEfQvuFdgigoQ==
+"@aws-cdk/service-spec-importers@^0.0.40":
+  version "0.0.40"
+  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-importers/-/service-spec-importers-0.0.40.tgz#3f27aebe00a030067294194166fc06d3e700935e"
+  integrity sha512-JTIWU7+LK1uUvAo+7QekGqskJpM0wLAWrW6T8+eHP5SlJvy6Qt9sdENgYThFjDsgzVkveyaAS/VcARsLzEkJcA==
   dependencies:
-    "@aws-cdk/service-spec-types" "^0.0.78"
+    "@aws-cdk/service-spec-types" "^0.0.79"
     "@cdklabs/tskb" "^0.0.3"
     ajv "^6"
     canonicalize "^2.0.0"
@@ -90,10 +90,10 @@
     glob "^8"
     sort-json "^2.0.1"
 
-"@aws-cdk/service-spec-types@^0.0.78":
-  version "0.0.78"
-  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-types/-/service-spec-types-0.0.78.tgz#6b21c32ad49426795750ded7ff84e31120e9f34c"
-  integrity sha512-wrEevMQs9ubCA6DpHAcPgtxiCKeB2bAuWXlJm5EahL678+jD3ucP+kBctaRgExAzTw0rYISjS/tbRYRVvjpxDA==
+"@aws-cdk/service-spec-types@^0.0.79":
+  version "0.0.79"
+  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-types/-/service-spec-types-0.0.79.tgz#9efdc886768e3b3754826ad0da291b7ecc209775"
+  integrity sha512-of5gMJx8Qn54rh5bxnsTg12d2N4EFToIEyczeWOXsNYsmsDDJlsrswCsBySe0BwLWvRga2iv1kFk8W6f+cIAZQ==
   dependencies:
     "@cdklabs/tskb" "^0.0.3"
```