
Overlap with LinearMaps.Block[Diagonal]Map #4

Open
dkarrasch opened this issue Feb 25, 2022 · 0 comments
I just came across this package. I like what I see, and it strongly reminds me of the concatenation feature in LinearMaps.jl. Out of curiosity, I ran some benchmarks following your README:

using LinearAlgebra
d = 512
n, m = 3, 4
A = [randn(d, d) for i in 1:n, j in 1:m]
using BenchmarkTools
using BlockFactorizations
B = BlockFactorization(A)
x = randn(d*m)
y = randn(d*n)

using LinearMaps
import LinearMaps: LinearMap
function blockmapify(A::Matrix{<:LinearMaps.MapOrVecOrMat})
    m, n = size(A)
    tA = permutedims(A)
    hvcat(
        Tuple(n for _ in 1:m),
        Tuple(LinearMaps.convert_to_lmaps_(a) for a in tA)...
    )
end
blockmapify(D::Diagonal{<:LinearMaps.MapOrVecOrMat}) =
    cat(LinearMaps.convert_to_lmaps_.(D.diag)..., dims=(1,2))

L = blockmapify(A)
@btime mul!($y, $B, $x)
@btime mul!($y, $L, $x)

A = [Diagonal(randn(d)) for i in 1:n, j in 1:m]
B = BlockFactorization(A)
L = blockmapify(A)
x = randn(d*m)
y = zeros(d*n)
@btime mul!($y, $B, $x)
@btime mul!($y, $L, $x)

d = 512
n = 16
# testing Diagonal BlockFactorization
A = Diagonal([(randn(d, d)) for i in 1:n])
B = BlockFactorization(A)
L = blockmapify(A)
x = randn(d*n)
y = zeros(d*n)
@btime mul!($y, $B, $x);
@btime mul!($y, $L, $x);

and I get

  909.636 μs (13 allocations: 1.17 KiB)
  892.634 μs (0 allocations: 0 bytes)

  11.907 μs (25 allocations: 2.48 KiB)
  7.973 μs (45 allocations: 3.09 KiB)

  1.270 ms (13 allocations: 2.22 KiB)
  1.268 ms (6 allocations: 1.27 KiB)
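As an aside, for a small fixed block layout the helper above isn't even needed: LinearMaps.jl overloads concatenation, so ordinary `hvcat` syntax builds a lazy `BlockMap` directly. A minimal sketch (block names and the small size `d = 4` are mine, not the benchmark sizes above):

```julia
using LinearAlgebra, LinearMaps

d = 4
A11, A12, A21, A22 = randn(d, d), randn(d, d), randn(d, d), randn(d, d)

# Ordinary concatenation of wrapped blocks yields a lazy BlockMap;
# no dense 2d×2d matrix is ever formed.
L = [LinearMap(A11) LinearMap(A12); LinearMap(A21) LinearMap(A22)]

x = randn(2d)
y = L * x   # multiplies block-wise

# agrees with the explicitly assembled dense matrix
y ≈ [A11 A12; A21 A22] * x
```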

Allocations in the second example are due to an issue with Diagonal that is fixed in Julia v1.8. Depending on your future plans, would you consider mapping your BlockFactorization to BlockMaps and developing further in LinearMaps.jl as your needs evolve? That would give you the additional advantage of completely lazy "matrix" algebra: scaling, addition, composition, etc. Note, however, that there is no guarantee of fast getindex and no plans for any kind of system solving (that is reserved for iterative-solver packages). Anyway, I just wanted to reach out to see how different packages in the ecosystem can collaborate nicely, to avoid duplicated effort and join forces.
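To illustrate the lazy algebra mentioned above, here is a minimal sketch (small sizes and names are mine, for illustration only): scaling, addition, adjoint, and composition of maps all stay lazy until a multiplication or an explicit `Matrix` call.

```julia
using LinearAlgebra, LinearMaps

A, B = randn(5, 5), randn(5, 5)
LA, LB = LinearMap(A), LinearMap(B)

# scaling, addition, and composition are all lazy:
# nothing is materialized until the map is applied
M = 3LA + LB * LA'

x = randn(5)
M * x ≈ (3A + B * A') * x   # applying the lazy map matches the dense result

Matrix(M) ≈ 3A + B * A'     # materialize only on demand
```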
