EC2MetadataCredentials and ECSCredentials retry #1114

Merged
LiuJoyceC merged 10 commits into aws:master from LiuJoyceC:metadataServiceRetry on Sep 8, 2016

Conversation

LiuJoyceC
Contributor

Adds retry to sourcing credentials for IAM roles from the EC2 Metadata Service or from ECS. By default, the credential request will be retried up to 3 times, with exponential backoff. For ECSCredentials, this PR also allows one asynchronous request to serve multiple refresh calls, so that at most one request is in flight at any point. Resolves #692

/cc: @chrisradek
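
For illustration, here is a minimal, self-contained sketch of the retry strategy described above (retry up to 3 times with exponential backoff). It is not the PR's actual code; the backoff formula and the 100 ms base are assumptions for the example.

// Illustrative only: retry an asynchronous operation up to maxRetries times
// with exponential backoff. The PR applies this general strategy to the
// credential fetch; the exact formula and defaults in the SDK may differ.
function retryWithBackoff(operation, callback, maxRetries, base) {
  maxRetries = maxRetries === undefined ? 3 : maxRetries; // PR default: 3 retries
  base = base === undefined ? 100 : base;                 // base delay in milliseconds

  function attempt(retryCount) {
    operation(function(err, data) {
      if (!err) return callback(null, data);
      if (retryCount >= maxRetries) return callback(err);
      var delay = base * Math.pow(2, retryCount);          // 100, 200, 400 ms, ...
      setTimeout(function() { attempt(retryCount + 1); }, delay);
    });
  }

  attempt(0);
}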

…Credentials. Adds queuing of refresh callbacks to ECSCredentials to reduce the number of asynchronous requests to retrieve credentials.
@coveralls
coveralls commented Aug 26, 2016

Coverage increased (+0.3%) to 89.198% when pulling 44fb727 on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@@ -100,7 +100,7 @@ require('./credentials/credential_provider_chain');
  * Currently supported options are:
  *
  * * **base** [Integer] — The base number of milliseconds to use in the
- * exponential backoff for operation retries. Defaults to 100 ms.
+ * exponential backoff for operation retries. Defaults to 30 ms.
Contributor

The default used to be 30 ms but was changed to 100 ms with the configurable retry delays.
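
For context, base here is the retryDelayOptions.base option exposed on AWS.Config and the service constructors; a brief usage sketch (the service and values are arbitrary examples, not part of this PR):

var AWS = require('aws-sdk');

// Start the exponential backoff for retried operations at 300 ms instead of
// the default documented above; values here are arbitrary.
var dynamodb = new AWS.DynamoDB({
  maxRetries: 5,
  retryDelayOptions: { base: 300 }
});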

@coveralls
coveralls commented Aug 30, 2016

Coverage increased (+0.3%) to 89.198% when pulling bb6745d on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@coveralls
coveralls commented Aug 30, 2016

Coverage increased (+0.3%) to 89.198% when pulling 90b410b on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@coveralls
coveralls commented Aug 30, 2016

Coverage increased (+0.3%) to 89.198% when pulling ea30bd7 on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@coveralls
coveralls commented Aug 30, 2016

Coverage increased (+0.3%) to 89.198% when pulling 1917d31 on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

},

/**
* @api private
*/
refreshQueue: [],
Contributor

Why is the refresh queue only implemented for ECS credentials and not EC2 credentials? EC2 credentials have the same issue.

Contributor Author
LiuJoyceC commented Aug 31, 2016

EC2MetadataCredentials already had this feature, through the MetadataService. The refreshQueue on the ECSCredentials is analogous to the loadCredentialsCallbacks on the MetadataService. They're structured slightly differently but work the same way. (refreshQueue has to retain references to the instances of ECSCredentials because the data isn't given to the callback the way it is in MetadataService).

Contributor

Ah, so EC2 already supported this, excellent.
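
For readers following along, a minimal sketch of the callback-queuing pattern discussed in this thread: the first refresh call starts the request, later calls only join the queue, and all queued callbacks are served by the single in-flight response. The fetchCredentials helper below is hypothetical; this is not the SDK's actual implementation.

// Illustrative sketch of the refresh-queue pattern described above.
var refreshQueue = [];

function refresh(callback) {
  refreshQueue.push(callback);
  if (refreshQueue.length > 1) return; // a request is already in flight

  fetchCredentials(function(err, data) {
    // Serve every caller that queued up while the request was in flight.
    var callbacks = refreshQueue.splice(0, refreshQueue.length);
    callbacks.forEach(function(cb) { cb(err, data); });
  });
}

// Hypothetical stand-in for the real HTTP call to the credentials endpoint.
function fetchCredentials(cb) {
  setTimeout(function() { cb(null, { AccessKeyId: 'EXAMPLE' }); }, 100);
}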

@coveralls
coveralls commented Sep 1, 2016

Coverage increased (+0.3%) to 89.219% when pulling 39705b1 on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

if (statusCode < 300) {
  cb(null, data);
} else {
  var retryAfter = parseInt(httpResponse.headers['retry-after']) * 1000 || 0;
Contributor

Minor thing, but you might want to specify that the int is parsed as a decimal number, i.e. parseInt(str, 10), for a little extra safety.

Contributor Author

Good catch, I'll modify it.
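
The suggested change would look roughly like this (a sketch of the revised line, not necessarily the exact code that was committed):

// Parse the Retry-After header explicitly as base-10; fall back to 0 when the
// header is missing or not a number (NaN * 1000 is NaN, which is falsy).
var retryAfter = parseInt(httpResponse.headers['retry-after'], 10) * 1000 || 0;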

@coveralls
coveralls commented Sep 1, 2016

Coverage increased (+0.3%) to 89.219% when pulling 783a1ab on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@coveralls
coveralls commented Sep 2, 2016

Coverage increased (+0.3%) to 89.219% when pulling 0c52b9f on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@coveralls
coveralls commented Sep 6, 2016

Coverage increased (+0.3%) to 89.219% when pulling e25adce on LiuJoyceC:metadataServiceRetry into 7e3c279 on aws:master.

@chrisradek
Contributor

:shipit:

LiuJoyceC merged commit 9f08143 into aws:master Sep 8, 2016
chrisradek added a commit that referenced this pull request Sep 9, 2016
Development

Successfully merging this pull request may close these issues.

Missing credentials in config happening intermittently