
Autoscaling Opsworks Configuration #3165

Closed · wants to merge 9 commits

Conversation

@eduherraiz

Added to opsworks_custom_layer the options needed to configure the load-based autoscaling parameters.
I followed the contributing guidelines in order to:

  • Follow the names used in the AWS API
  • Document the options in the markdown files
  • Add acceptance tests and make them pass.


make testacc TEST=./aws TESTARGS='-run=TestAccAWSOpsworksCustomLayer'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./aws -v -run=TestAccAWSOpsworksCustomLayer -timeout 120m
=== RUN   TestAccAWSOpsworksCustomLayerImportBasic
--- PASS: TestAccAWSOpsworksCustomLayerImportBasic (107.21s)
=== RUN   TestAccAWSOpsworksCustomLayer
--- PASS: TestAccAWSOpsworksCustomLayer (104.25s)
PASS
ok      github.com/terraform-providers/terraform-provider-aws/aws       211.472s
@ghost ghost added the size/L Managed by automation to categorize the size of a PR. label Jan 28, 2018
@radeksimko radeksimko added service/opsworks Issues and PRs that pertain to the opsworks service. enhancement Requests to existing resources that expand the functionality or scope. labels Jan 28, 2018
@eduherraiz (Author)

If you have any change proposals, I can work on them!

@commixon

commixon commented Oct 1, 2018

I'm really interested in this one. For now I have it merged in my custom plugin, but I would really like it to be merged upstream. I would also propose two changes.

  • First of all, downscaling_mem_threshold and upscaling_mem_threshold have no default value, and this causes an error.
  • Secondly, I would propose default values of -1 (which translates to disabling the threshold). This is more intuitive in my opinion; otherwise you end up with default values without explicitly setting them:
diff --git a/aws/opsworks_layers.go b/aws/opsworks_layers.go
index 50665a015..498c5bfeb 100644
--- a/aws/opsworks_layers.go
+++ b/aws/opsworks_layers.go
@@ -218,7 +218,7 @@ func (lt *opsworksLayerType) SchemaResource() *schema.Resource {
                "downscaling_cpu_threshold": {
                        Type:     schema.TypeFloat,
                        Optional: true,
-                       Default:  30,
+                       Default:  -1, // -1 disables the threshold
                },

                "downscaling_ignore_metrics_time": {
@@ -241,6 +241,7 @@ func (lt *opsworksLayerType) SchemaResource() *schema.Resource {
                "downscaling_mem_threshold": {
                        Type:     schema.TypeFloat,
                        Optional: true,
+                       Default:  -1, // -1 disables the threshold
                },

                "downscaling_threshold_wait_time": {
@@ -258,7 +259,7 @@ func (lt *opsworksLayerType) SchemaResource() *schema.Resource {
                "upscaling_cpu_threshold": {
                        Type:     schema.TypeFloat,
                        Optional: true,
-                       Default:  80,
+                       Default:  -1, // -1 disables the threshold
                },

                "upscaling_ignore_metrics_time": {
@@ -281,6 +282,7 @@ func (lt *opsworksLayerType) SchemaResource() *schema.Resource {
                "upscaling_mem_threshold": {
                        Type:     schema.TypeFloat,
                        Optional: true,
+                       Default:  -1, // -1 disables the threshold
                },
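With -1 as the schema default, the provider's create/update path would need to treat negative values as "not set" and omit them from the API request. A minimal sketch of that mapping, using plain stand-in types rather than the real aws-sdk-go opsworks structs (the thresholdOrNil helper is hypothetical, not part of the PR):

```go
package main

import "fmt"

// AutoScalingThresholds mirrors the shape of the aws-sdk-go opsworks
// struct for illustration only.
type AutoScalingThresholds struct {
	CpuThreshold    *float64
	MemoryThreshold *float64
}

// thresholdOrNil is a hypothetical helper: the proposed schema default
// of -1 means "disabled", so no value would be sent to the API.
func thresholdOrNil(v float64) *float64 {
	if v < 0 {
		return nil
	}
	return &v
}

func main() {
	down := AutoScalingThresholds{
		CpuThreshold:    thresholdOrNil(30), // explicitly configured
		MemoryThreshold: thresholdOrNil(-1), // left at the -1 default
	}
	fmt.Println(down.CpuThreshold != nil, down.MemoryThreshold == nil) // prints: true true
}
```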

@eduherraiz (Author)

Thanks for reviewing it!
I configured those default values because they are the defaults in the OpsWorks autoscaling configuration. But I think you are right, and it is better to set the default value to -1.

@commixon

commixon commented Oct 1, 2018

Nice PR btw. I am currently using it in production. Do you know if there is any update as to when this is going to be merged upstream?

@eduherraiz (Author)

No, sorry!
I have been using it in production for a long time too.
I suppose OpsWorks is not a priority for the Terraform team. 😞

@bflad (Contributor) left a comment

Hi @eduherraiz 👋 Sorry for the lengthy review delay. We have quite the backlog to work through. 😅

I left some initial comments below and am curious if you can implement the feedback suggested by @commixon as well. Let us know if you don't have time or have any questions, thanks!

return err
}

if autoScalings.LoadBasedAutoScalingConfigurations == nil || len(autoScalings.LoadBasedAutoScalingConfigurations) == 0 {

This section could be simplified to:

d.Set("autoscaling", false)
if autoScalings.LoadBasedAutoScalingConfigurations != nil && len(autoScalings.LoadBasedAutoScalingConfigurations) != 0 && autoScalings.LoadBasedAutoScalingConfigurations[0] != nil {
  lt.SetAutoscaling(d, autoScalings.LoadBasedAutoScalingConfigurations[0])
}
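One small aside on the suggested condition: in Go, len of a nil slice is 0, so the explicit nil check on the slice itself is safe but redundant (the nil check on the first element still matters). A quick demonstration:

```go
package main

import "fmt"

func main() {
	var configs []*string // nil slice, as when the API returns nothing

	// len() of a nil slice is 0, so `configs != nil && len(configs) != 0`
	// can be shortened to `len(configs) != 0` with the same behavior.
	fmt.Println(configs == nil, len(configs)) // prints: true 0
}
```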

@@ -308,7 +386,20 @@ resource "aws_opsworks_custom_layer" "tf-acc" {
mount_point = "/home"
size = 100
raid_level = 0
}
},
autoscaling = true

Could you please split this out into its own acceptance test configuration and acceptance test function? We should verify that not having these specified does not generate any unexpected differences. 👍

@bflad bflad added the waiting-response Maintainers are waiting on response from community or contributor. label Oct 9, 2018
@ghost ghost added size/XL Managed by automation to categorize the size of a PR. documentation Introduces or discusses updates to documentation. tests PRs: expanded test coverage. Issues: expanded coverage, enhancements to test infrastructure. and removed size/L Managed by automation to categorize the size of a PR. labels Oct 9, 2018
LayerId: aws.String(d.Id()),
DownScaling: &opsworks.AutoScalingThresholds{
Alarms: expandStringList(d.Get("downscaling_alarms").([]interface{})),
CpuThreshold: aws.Float64(float64(d.Get("downscaling_cpu_threshold").(float64))),

FYI, these float64() of float64 types are triggering make lint failures in CI since they are unnecessary:

$ make lint
==> Checking source code against linters...
aws/opsworks_layers.go:1::warning: file is not gofmted with -s (gofmt)
aws/opsworks_layers.go:774:43:warning: unnecessary conversion (unconvert)
aws/opsworks_layers.go:777:43:warning: unnecessary conversion (unconvert)
aws/opsworks_layers.go:778:43:warning: unnecessary conversion (unconvert)
aws/opsworks_layers.go:783:43:warning: unnecessary conversion (unconvert)
aws/opsworks_layers.go:786:43:warning: unnecessary conversion (unconvert)
aws/opsworks_layers.go:787:43:warning: unnecessary conversion (unconvert)
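For reference, unconvert flags conversions whose operand already has the target type; here the type assertion `.(float64)` already yields a float64, so the extra float64(...) wrapper is a no-op. A minimal generic illustration (not the provider's actual code):

```go
package main

import "fmt"

func main() {
	d := map[string]interface{}{"downscaling_cpu_threshold": 30.0}

	// Flagged by unconvert: the assertion already produces a float64.
	//   redundant := float64(d["downscaling_cpu_threshold"].(float64))

	// Fixed: drop the wrapper.
	v := d["downscaling_cpu_threshold"].(float64)
	fmt.Println(v) // prints: 30
}
```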

@eduherraiz (Author)

Oh! Thank you. I will check it tomorrow.

@bflad (Contributor)

bflad commented Nov 14, 2018

Hi @eduherraiz 👋 Will you have a chance to finish this up? No worries if not, just please let us know so one of the community members or maintainers can pick this up. Thanks!

@eduherraiz (Author)

I think the code is working properly.
I tried to complete the tests, but I'm a bit stuck on the correct syntax and dependencies of the tests.

@ghost ghost removed the waiting-response Maintainers are waiting on response from community or contributor. label Nov 16, 2018
@aeschright aeschright requested a review from a team June 25, 2019 19:26
@xsalazar (Contributor)

While the idea here is good, there are some issues with this implementation. By default, right now, these settings are set as follows within the AWS API when creating a brand new OpsWorks Layer from scratch:

xsalazar@ubuntu:~/code/sandbox$ aws --profile=terraform --region=us-east-1 opsworks describe-load-based-auto-scaling --layer-id=500f9af3-65b9-4d6a-9ed4-3c06d3ec9b14
{
    "LoadBasedAutoScalingConfigurations": [
        {
            "DownScaling": {
                "Alarms": [], 
                "IgnoreMetricsTime": 10, 
                "ThresholdsWaitTime": 10, 
                "InstanceCount": 1, 
                "CpuThreshold": 30.0
            }, 
            "Enable": false, 
            "UpScaling": {
                "Alarms": [], 
                "IgnoreMetricsTime": 5, 
                "ThresholdsWaitTime": 5, 
                "InstanceCount": 1, 
                "CpuThreshold": 80.0
            }, 
            "LayerId": "500f9af3-65b9-4d6a-9ed4-3c06d3ec9b14"
        }
    ]
}

Note the default values, as well as the Enable flag set to false.

Even if you don't want to enable this setting, but try to apply the default Threshold values in this PR as -1 (disabled), the AWS API will throw an exception:

xsalazar@ubuntu:~/code/sandbox$ aws --profile=terraform --region=us-east-1 opsworks set-load-based-auto-scaling --layer-id=500f9af3-65b9-4d6a-9ed4-3c06d3ec9b14 --up-scaling CpuThreshold=-1,MemoryThreshold=-1,LoadThreshold=-1

An error occurred (ValidationException) when calling the SetLoadBasedAutoScaling operation: 

It seems that at least one threshold value must be set, even if load-based autoscaling is disabled altogether. So this PR as-is will not work currently: if you did not set any of these values, they would all be initialized as -1, and the apply would fail.

We could slightly work around this by setting the defaults as they truly are in the AWS API, namely:

downscaling.ignore_metrics_time = 10
downscaling.thresholds_wait_time = 10
downscaling.instance_count = 1
downscaling.cpu_threshold = 30

upscaling.ignore_metrics_time = 5
upscaling.thresholds_wait_time = 5
upscaling.instance_count = 1
upscaling.cpu_threshold = 80

But then you run into a scenario where you'd need to explicitly set the thresholds you don't care about to -1 to disable them, which is neither a standard pattern nor super intuitive.

Finally, I think a preferable structure for setting this in HCL would look like:

load_based_autoscaling {
  enabled = true
  upscaling {
    alarms = []
    cpu_threshold = ...
    ignore_metrics_time = ...
    instance_count = ...
    load_threshold = ...
    memory_threshold = ...
    thresholds_wait_time = ...
  }

  downscaling {
    alarms = []
    cpu_threshold = ...
    ignore_metrics_time = ...
    instance_count = ...
    load_threshold = ...
    memory_threshold = ...
    thresholds_wait_time = ...
  }
}

This would give the added benefit of being able to reuse the internal complex structure for both downscaling and upscaling.
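In Go terms, that reuse would mean one shared threshold type for both directions (in the provider itself, this would be a single nested schema referenced by both blocks). A rough sketch with plain structs; the field names follow the proposed HCL and are illustrative only:

```go
package main

import "fmt"

// ScalingThresholds is a single shape shared by upscaling and
// downscaling, mirroring the proposed HCL block.
type ScalingThresholds struct {
	Alarms             []string
	CpuThreshold       float64
	IgnoreMetricsTime  int
	InstanceCount      int
	LoadThreshold      float64
	MemoryThreshold    float64
	ThresholdsWaitTime int
}

// LoadBasedAutoscaling nests the same threshold type twice.
type LoadBasedAutoscaling struct {
	Enabled     bool
	UpScaling   ScalingThresholds
	DownScaling ScalingThresholds
}

func main() {
	cfg := LoadBasedAutoscaling{
		Enabled:     true,
		UpScaling:   ScalingThresholds{CpuThreshold: 80, InstanceCount: 1},
		DownScaling: ScalingThresholds{CpuThreshold: 30, InstanceCount: 1},
	}
	fmt.Println(cfg.UpScaling.CpuThreshold, cfg.DownScaling.CpuThreshold) // prints: 80 30
}
```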

@eduherraiz (Author)

Hello @xsalazar, thank you for your comments!
I can't make these changes because the feature is no longer useful to me.
I'll be happy to hear if you finish this integration. :-D

@xsalazar (Contributor)

@eduherraiz I figured, given how old this PR was, that it was past its relevance! I just ran into the issue of needing to provision this setting recently and dug into your solution to solve it upstream. Thank you for the initial work; I'm definitely still thinking about working on it some more when I have time.

@hwoarang

Can we close this one in favor of #10962 ?

Base automatically changed from master to main January 23, 2021 00:55
@breathingdust breathingdust requested a review from a team as a code owner January 23, 2021 00:55
@zhelding (Contributor)

Pull request #21306 has significantly refactored the AWS Provider codebase. As a result, most PRs opened prior to the refactor now have merge conflicts that must be resolved before proceeding.

Specifically, PR #21306 relocated the code for all AWS resources and data sources from a single aws directory to a large number of separate directories in internal/service, each corresponding to a particular AWS service. This separation of code has also allowed for us to simplify the names of underlying functions -- while still avoiding namespace collisions.

We recognize that many pull requests have been open for some time without yet being addressed by our maintainers. Therefore, we want to make it clear that resolving these conflicts in no way affects the prioritization of a particular pull request. Once a pull request has been prioritized for review, the necessary changes will be made by a maintainer -- either directly or in collaboration with the pull request author.

For a more complete description of this refactor, including examples of how old filepaths and function names correspond to their new counterparts: please refer to issue #20000.

For a quick guide on how to amend your pull request to resolve the merge conflicts resulting from this refactor and bring it in line with our new code patterns: please refer to our Service Package Refactor Pull Request Guide.

@ewbankkit (Contributor)

Superseded by #10962.

@ewbankkit ewbankkit closed this Aug 11, 2022
@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 11, 2022
Labels
documentation · enhancement · service/opsworks · size/XL · tests
9 participants