diag(sparse matrix) should be a sparse vector? #21064

Closed
StefanKarpinski opened this issue Mar 16, 2017 · 19 comments · Fixed by #23261
@StefanKarpinski

Currently, when you take the diagonal of a sparse matrix, you get a dense vector:

julia> S = sprand(10, 10, 0.1)
10×10 SparseMatrixCSC{Float64,Int64} with 10 stored entries:
  [3 ,  1]  =  0.284124
  [2 ,  2]  =  0.627361
  [9 ,  5]  =  0.339561
  [5 ,  6]  =  0.848555
  [1 ,  8]  =  0.538286
  [3 ,  8]  =  0.703678
  [5 ,  8]  =  0.0960039
  [8 ,  8]  =  0.00592429
  [5 ,  9]  =  0.170894
  [1 , 10]  =  0.324079

julia> diag(S)
10-element Array{Float64,1}:
 0.0
 0.627361
 0.0
 0.0
 0.0
 0.0
 0.0
 0.00592429
 0.0
 0.0

This should arguably be a sparse vector since for a generic sparse matrix, the diagonal is sparse.

@andreasnoack

As usual with sparse matrices, it depends on the sparsity pattern. Sparse matrices from physical simulations would typically have a banded structure such that the diagonal is dense. For sparse matrix representations of graphs, it is probably less likely. We could do a survey based on Tim Davis' sparse matrix collection.

@martinholters

Putting dense data into a SparseVector is a waste of memory, but only by a smallish factor, while putting sparse data into a Vector may be a gigantic waste of memory (although contrary to the situation with matrices, it is less likely to exceed physical memory). So returning a sparse result would be the safer choice, wouldn't it?
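The asymmetry can be made concrete. A sketch in current Julia (assuming the SparseArrays stdlib; `Base.summarysize` is only an approximation of heap usage): a dense vector costs O(n) bytes regardless of content, while a sparse vector costs O(nnz).

```julia
using SparseArrays

n = 10^6
dense_diag  = zeros(n)                  # n * 8 bytes, ~8 MB, regardless of content
sparse_diag = sparsevec([1], [1.0], n)  # O(nnz) storage: one stored entry

# The dense representation is orders of magnitude larger when the data is
# sparse; at worst it is only ~2x smaller when the data is fully dense.
Base.summarysize(dense_diag)
Base.summarysize(sparse_diag)
```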

@andreasnoack

I guess that argument applies to all arrays, even dense ones. The safe choice that minimizes the worst-case relative memory waste is to always produce a sparse vector: if the vector happens to have no zeros, there is a 2x waste, while returning a Vector when all the entries are zero is an Inf x waste (metadata aside). Still, we think A[:,1] should be dense for A::Matrix because we believe the entries will typically be nonzero. So there is a trade-off that depends on what you consider a typical sparsity pattern. I don't have strong opinions, though; I just wanted to point out that this might be expensive for a large and very common class of sparse matrices.

@JeffBezanson

I think dense is a reasonable choice here. My impression is that the sparse world generally considers O(n) size small enough.

@mbauman

mbauman commented Mar 17, 2017

Given that our matrices are CSC, we're already storing a dense list of size O(n). The story might be different with a COO format.
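A quick illustration of that point, assuming the SparseArrays stdlib and direct access to the `colptr` field: a CSC matrix stores one column pointer per column plus a sentinel, even when nothing else is stored.

```julia
using SparseArrays

S = spzeros(1000, 1000)   # no stored entries at all
length(S.colptr)          # 1001: one pointer per column, plus a sentinel
nnz(S)                    # 0: the O(n) colptr cost is paid regardless
```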

@StefanKarpinski

A SparseVector result would not be O(n).

@tkelman

tkelman commented Mar 17, 2017

Off-diagonals requested with the two-argument form aren't usually expected to be dense the way the main diagonal more commonly is. So there's a pretty good argument for returning a sparse vector from diag(A::SparseMatrixCSC, k), and then it would be a bit strange for diag(A) and diag(A, 0) to return different types.

@fredrikekre

julia> diag(A)
5-element Array{Float64,1}:
 0.774844
 0.0     
 0.443253
 0.0     
 0.0     

julia> diag(A,0)
5-element SparseVector{Float64,Int64} with 2 stored entries:
  [1]  =  0.774844
  [3]  =  0.443253

I got hit by this inconsistency in #22925

@StefanKarpinski StefanKarpinski added this to the 1.0 milestone Jul 31, 2017
@StefanKarpinski

It would be great if a decision could be made here about which direction to go, for consistency, so that the different forms of diag match. We need a bat signal for linalg people...

@andreasnoack

The current situation is not completely unreasonable, though, since the diagonal will often be quite dense whereas many off-diagonals will be very sparse. But I can also see that it would be nice if the two methods returned the same type.

Following up on @martinholters's comment, I'd argue that the safest choice would be the one with the smallest absolute worst-case memory usage. That is not the only concern, or necessarily the main one, but I'd also argue that diag(A) is much more common than diag(A, k) with k != 0.

Bottom line is that we disagree. Specifying a return type seems like a general problem. Maybe a solution could be to define spdiag?

@mbauman

mbauman commented Jul 31, 2017

Possible alternative: make diag be always dense and make it easier to construct diagonal views either through an indexing syntax or some other function. That'd give an O(n) memory solution with fast access and an O(1) memory solution with (deferred) access complexity based on the source array.
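Something close to this deferred O(1)-memory option already falls out of `view` plus `diagind` in current Julia; a sketch using only generic machinery, not a dedicated API:

```julia
using SparseArrays, LinearAlgebra

A = sprand(100, 100, 0.05)
dview = @view A[diagind(A)]   # O(1) extra memory; each read searches the column
dview == diag(A)              # same values as the eager copy, accessed lazily
```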

@tkelman

tkelman commented Jul 31, 2017

Diagonal views would be a good idea, but indexing into one would be more expensive than indexing into an eager sparse or dense diagonal slice (a copy).

@StefanKarpinski

diag(A) should behave the same as diag(A,0) – I don't care if that's sparse or dense, but it should be consistent.

@StefanKarpinski

StefanKarpinski commented Aug 3, 2017

@tkelman makes the argument that if we go with always sparse:

  1. It will handle the typical off-diagonal case well;
  2. The potential upside – ratio of dense memory used to sparse memory used – is unbounded;
  3. The potential downside – the inverse ratio – is at most 2x, which is still not bad;
  4. Preserves potentially significant information about structural storage pattern – there's no way to recover this easily if we default to returning dense diagonals.

When someone writes full(diag(A)) we could potentially eventually optimize away the creation of the sparse vector and just write entries directly to the dense output vector without changing any semantics, giving the best of both worlds. The optimization in the other direction is much harder.

Resolved: always return a sparse vector here.

@andreasnoack

Might be worth browsing through the thumbnails at https://www.cise.ufl.edu/research/sparse/matrices/list_by_id.html to get an idea of typical density patterns of diagonals of typical sparse matrices.

@StefanKarpinski

StefanKarpinski commented Aug 3, 2017

Yes, it's definitely true that the diagonal is often dense. Of course, if O(n) storage is ok for your data (as opposed to O(n^2)) then 2n is still ok, and that's the worst case for a dense diagonal. I would add that in the applications where I've used sparse matrices a lot, the diagonal is almost never dense (they're also usually not square, which may be related).

@tkelman

tkelman commented Aug 10, 2017

Looks like diag(::SparseMatrixCSC, ::Integer) is currently dispatching to the AbstractArray method here

diag(A::AbstractMatrix, k::Integer=0) = A[diagind(A,k)]

so might need to generalize the SpDiagIterator here

mutable struct SpDiagIterator{Tv,Ti}
    A::SparseMatrixCSC{Tv,Ti}
    n::Int
end
SpDiagIterator(A::SparseMatrixCSC) = SpDiagIterator(A, minimum(size(A)))

length(d::SpDiagIterator) = d.n
start(d::SpDiagIterator) = 1
done(d::SpDiagIterator, j) = j > d.n
function next(d::SpDiagIterator{Tv}, j) where Tv
    A = d.A
    r1 = Int(A.colptr[j])
    r2 = Int(A.colptr[j+1] - 1)
    (r1 > r2) && (return (zero(Tv), j+1))
    r1 = searchsortedfirst(A.rowval, j, r1, r2, Forward)
    (((r1 > r2) || (A.rowval[r1] != j)) ? zero(Tv) : A.nzval[r1], j+1)
end

to handle off-diagonals. Haven't looked at it very closely but I think that shouldn't be too complicated?
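A sketch of what such a generalization might look like, as a hypothetical spdiag helper (not the actual patch): it reads the CSC fields directly and does a linear scan per column rather than the searchsortedfirst used in the iterator above.

```julia
using SparseArrays

# Hypothetical helper: extract diagonal k of a CSC matrix as a SparseVector.
function spdiag(A::SparseMatrixCSC{Tv,Ti}, k::Integer=0) where {Tv,Ti}
    m, n = size(A)
    # Length of diagonal k: min(m, n-k) above the main diagonal,
    # min(m+k, n) below it; clamp at zero for out-of-range k.
    len = max(k >= 0 ? min(m, n - k) : min(m + k, n), 0)
    I = Ti[]
    V = Tv[]
    for i in 1:len
        r = k < 0 ? i - k : i   # row index of the i-th entry of diagonal k
        c = k > 0 ? i + k : i   # column index
        # Scan the stored entries of column c for row r.
        for p in A.colptr[c]:(A.colptr[c+1] - 1)
            if A.rowval[p] == r
                push!(I, i)
                push!(V, A.nzval[p])
                break
            end
        end
    end
    return sparsevec(I, V, len)
end
```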

@andreasnoack

Just tested this, and deleting

diag(A::SparseMatrixCSC{Tv}) where {Tv} = Tv[d for d in SpDiagIterator(A)]

seems to be all that is required.
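For what it's worth, the reason deleting that method suffices: on a SparseMatrixCSC, the fallback's linear indexing already produces a sparse result. A quick check in current Julia (SparseArrays and LinearAlgebra stdlibs assumed):

```julia
using SparseArrays, LinearAlgebra

A = sprand(5, 5, 0.4)
# The AbstractMatrix fallback is diag(A, k) = A[diagind(A, k)];
# sparse linear indexing makes that a SparseVector.
A[diagind(A, 0)] isa SparseVector
```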

@tkelman

tkelman commented Aug 10, 2017

That would fix the return type, but the fallback is probably less efficient? Worth comparing.
