
solving problems with non-Float64 types #890

Open

goulart-paul opened this issue Oct 3, 2023 · 5 comments

@goulart-paul

Trying to solve problems with solvers that are configured for something other than Float64 leads to failures. I am not sure if this is a solver issue or a PowerModels one, but it happens with more than one solver.

Example:

solve_dc_opf("whatever.m",COSMO.Optimizer) #works
solve_dc_opf("whatever.m",COSMO.Optimizer{Float64}) #works
solve_dc_opf("whatever.m",COSMO.Optimizer{Float32}) #errors
solve_dc_opf("whatever.m",COSMO.Optimizer{BigFloat}) #errors

The same holds for Clarabel.Optimizer{Float32} etc.

The stack trace for the error is quite long, but the gist of it is that constraints and objectives parametrized as Float64 are still reaching the model.

@ccoffrin
Member

ccoffrin commented Oct 3, 2023

Hi @goulart-paul, this is an interesting question. I don't know much about the new type-specific JuMP models, but PowerModels was certainly not designed with this feature in mind; it will be interesting to see if we can get it to work. I hope @odow will be able to provide some insights.

My guess would be that PowerModels builds a JuMP model that is Float64, so when you pass a solver with a type that does not match, there is a mismatch.

One little known trick is that you can pass a JuMP model into these functions. So maybe you could try something like,

m = GenericModel{Float32}()
solve_dc_opf("whatever.m", COSMO.Optimizer{Float32}, jump_model=m)

This may just push the error to a different part of PowerModels' code, where it starts building the model, but it is worth a try.

I don't know what would be required to make PowerModels support this in a robust and flexible way, but I do think it would be a good feature to have.

@goulart-paul
Author

A partial fix seems possible by just changing one line here:

Base.getindex(v::JuMP.VariableRef, i::Int) = v

to

Base.getindex(v::JuMP.GenericVariableRef, i::Int) = v

Then this works:

using Clarabel, JuMP, PowerModels
m = GenericModel{BigFloat}()
solve_dc_opf("pglib_opf_case14_ieee.m", Clarabel.Optimizer{BigFloat}, jump_model=m)

or at least it works in the sense that a problem is presented to the solver in BigFloat format, and the solver gives the same solution as in the Float64 case. There are a lot of post-processing warnings thrown by InfrastructureModels though, i.e. this, many times:

[warn | InfrastructureModels]: build_solution_values found unknown type GenericVariableRef{BigFloat}

Going down in precision doesn't work even with the above modification, i.e. substituting Float32 for BigFloat everywhere above produces a lot of errors. This is almost certainly because the code uses constant values that are implicitly Float64 everywhere. The compiler is happy to promote things to BigFloat but won't demote them to Float32, so the model ends up with constraints of Float64 type. Fixing that would be sort of annoying: it would probably entail making every function that involves such constants parametric on some type T associated with the model type, and then peppering the code with one(T) and T(2.0) type constructs everywhere.
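The promote-but-not-demote behaviour follows directly from Julia's promotion rules:

promote_type(Float64, BigFloat)  # BigFloat: Float64 constants are absorbed upward
promote_type(Float64, Float32)   # Float64: Float32 expressions get dragged up to Float64

A minimal sketch of the one(T)/T(2.0) pattern, using a hypothetical constraint builder rather than PowerModels' actual code:

using JuMP

# Hypothetical helper: the number type T is recovered by dispatch on
# GenericModel{T}, so literals can be written as T(2) / one(T) rather
# than the implicitly-Float64 2.0 / 1.0.
function add_scaled_bound(model::JuMP.GenericModel{T}, x, ub) where {T}
    JuMP.@constraint(model, T(2) * x <= T(ub) + one(T))
end

model = JuMP.GenericModel{Float32}()
JuMP.@variable(model, x)
add_scaled_bound(model, x, 1.5)  # the constraint stays Float32 end to end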

Other problem types, e.g. SOCPs, still fail, so I will see if I can find an easy fix there as well. We are particularly interested in using PowerModels as a source of SOCP benchmarks, and have found that some fraction of the SOCPs terminate only with reduced accuracy. My suspicion is that most of those problems might be solvable with a 128-bit type.

@goulart-paul
Author

SOCPs appear to be fixed by modifying this file in InfrastructureModels. I just replaced JuMP.Model with JuMP.GenericModel everywhere, and JuMP.VariableRef with JuMP.GenericVariableRef, and suddenly everything works with higher precision types.
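The change is just widening method signatures, since JuMP.Model and JuMP.VariableRef are aliases for the Float64 specializations. A sketch of the pattern (illustrative, not the exact InfrastructureModels code):

using JuMP

# before (matches only Float64, since
# JuMP.VariableRef === JuMP.GenericVariableRef{Float64}):
#     build_solution_values(var::JuMP.VariableRef) = JuMP.value(var)

# after (matches GenericVariableRef{T} for any T):
build_solution_values(var::JuMP.GenericVariableRef) = JuMP.value(var)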

I am sorry not to make a PR of this, but I think there are probably a lot of other places that require a similar treatment and I have only a very dim understanding of what is going on with this code. I couldn't get SDPs to work, though it seems like it should be possible. For the moment we are happy to have working SOCPs for benchmarking.

@odow
Collaborator

odow commented Oct 4, 2023

Wait up, why is this method even here??? 🏴‍☠️

Base.getindex(v::JuMP.VariableRef, i::Int) = v

@odow
Collaborator

odow commented Oct 4, 2023

But yes, there are probably a bunch of places that we need to replace hard-coded constants with the appropriate T(x) version.

I don't know if there's anything fundamentally limiting; it's just a matter of grinding through the different cases.
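Grinding through a single case might look like this (a hedged sketch; JuMP.value_type is available in recent JuMP releases):

using JuMP

model = JuMP.GenericModel{Float32}()
T = JuMP.value_type(typeof(model))  # Float32
JuMP.@variable(model, pg)
pmax = T(1.5)

# before: a bare 2.0 can drag the expression up to Float64 and
# mismatch the Float32 model
#     JuMP.@constraint(model, pg <= 2.0 * pmax)

# after: the literal is routed through T, so everything stays Float32
JuMP.@constraint(model, pg <= T(2) * pmax)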
