Implement CloudFormation backend. #803
Conversation
Regarding migration plans, my initial thought is that we just run the old and new backends in parallel, but only use the new backend for newly created apps. Once we're comfortable with it, we should have a mostly (or entirely) automated way to schedule an existing release using the new scheduler, then destroy the old resources using the old scheduler. In pseudocode, something like:
```diff
@@ -301,6 +301,7 @@
     {
       "Effect": "Allow",
       "Action": [
+        "cloudformation:*",
```
When this is ready, need to determine exact permissions that Empire needs (and maybe we can lock this down so it can only access stacks that it created).
```go
subnets = t.ExternalSubnetIDs
}

instancePort := int64(9000) // TODO: Allocate a port
```
One thing that is a little more challenging in a CloudFormation world is instance port allocation. Because we no longer have direct control over creating and deleting load balancers, it's hard to allocate and release ports for them.
I was thinking that we could just add a custom resource with a lambda function that allocates and releases ports, so it's just native CloudFormation (and can be re-used outside of Empire).
Or maybe better yet, use SNS backed custom resources and have Empire listen on an SQS queue to provision port allocations. Keeps everything more self contained.
Ok, once #811 is merged, this is feature complete and has all the existing functionality of the old scheduler. There may be some minor bugs to fix, but this works beautifully in my tests locally. Once this is merged, I'll work on an automated migration scheduler that will migrate apps from the old scheduler to the new one.
Also, it's worth mentioning that this is currently only enabled when the

```go
_, err = s.s3.PutObject(&s3.PutObjectInput{
	Bucket: aws.String(s.Bucket),
	Key:    aws.String(fmt.Sprintf("/%s", key)),
```
Since we have the app id, I would include that in this key. I could see it being useful to view different stack "versions" for a specific app.
Good call. I'll update.
Done in 84f55e5. The template is now stored in a folder prefixed with the app name and id.
👍 all of this looks awesome
Ok. Gonna go ahead and merge this in. This won't become the default scheduling backend until there's an easy way to migrate from the old backend to the new one. My hope is that this becomes the default backend in the next release, with an automated migrator. Then we can remove the old backend, which frees us up to move faster on the extended Procfile. There will probably be some minor bugs to fix in this, but we can deal with those as we see them.
Eric, this is so incredibly dope. I totally agree with your assessment of the extended Procfile and am glad Empire didn't go down that route.
@dekz 🤘 super excited about this as well. Opens up a lot of possibilities for Empire.
Fixes #556
Fixes #684
Probably fixes #665
Probably fixes #770
Can fix #590 easily
Closes #560
Closes #578
Makes #630 trivial
Makes #706 trivial
Makes #723 trivial
Makes #796 trivial
Makes #797 trivial
WIP, so not everything is implemented yet (about 80-90%).
The deeper I go into adding the extended Procfile, the more I feel like we're just reimplementing terraform/cloudformation within Empire, which is not what I want to spend my time doing. The only reason we never started with a CloudFormation backend was because CloudFormation didn't support ECS when we built Empire :).
This adds support for a CloudFormation backend, so that the AWS resources for an app are managed entirely by CloudFormation. We just pass a `scheduler.App` to a template and get back a CloudFormation stack that represents the application. There are massive benefits to doing this:
The only disadvantage is that we'll need to come up with a simple and safe migration plan (one that's easy for other users of Empire as well), since this will require all-new ECS/ELB resources for existing apps.
This will make almost everything we do inside Empire vastly simpler as we add features moving forward, so I think it's worth tackling now.
TODO