[CopyObject]: fix hardcoded URL, re-enable CopyObject support and tests #284
Conversation
This fixes the hardcoded URL for the initial HeadObject call used by the CopyObject method. It also re-enables the tests that were disabled accordingly in awslabs#246.
Provide `CopyObject` support based on awslabs/aws-c-s3#284. In contrast to the existing `CopyObject` function, objects larger than 5GB are supported, and multipart copies are automatically handled in parallel.
source/s3_request_messages.c
Outdated
struct aws_http_message *base_message) {
    AWS_PRECONDITION(allocator);
    if (source_bucket.len == 0 || request_path.len == 0) {
        AWS_LOGF_ERROR(AWS_LS_S3_GENERAL, "CopyRequest x-amz-copy-source header not in bucket/key format");
trivial: maybe print out the wrong value to simplify debugging, if it's not sensitive?
If the user uses an ARN as the source, it may still have a `/` in it and get through this check, right?
Added printout.
ARNs only support `/` in the resource section, meaning the bucket name would contain a bunch of non-DNS-compliant characters and the request would fail. Arguable, but for simplicity's sake I think it's fine until we can fix copy support to work in all cases.
struct aws_http_message *message = NULL;
if (aws_byte_buf_init_copy_from_cursor(&head_object_host_header, allocator, source_bucket)) {
trivial: we could initialize the buffer once with the proper length to avoid a further allocation when we do the `append_dynamic` later.
We could, but the code looks slightly uglier since we'd need to break the init into init + append. Personally, I slightly prefer this variant even if it's a little less efficient.
++num_parts;
}
aws_s3_calculate_optimal_mpu_part_size_and_num_parts(
    copy_object->synced_data.content_length, s_min_copy_part_size, max_part_size, &part_size, &num_parts);
Do we ever mention in the public docs that there is a `min_copy_part_size`? It will override the part size set by the user, right?
We never documented it; it has been an internal constant ever since we introduced this class.
For copy it needs to be quite a bit bigger than the client part size, and we probably want to expose a separate config for it.
Added a TODO.
Resolves #282.
With a suitable bucket structure (using us-east-1), the tests are now passing.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.