
GraphOps refactor #1544

Merged (11 commits) on Mar 21, 2024

Conversation

ljeub-pometry (Collaborator):

What changes were proposed in this pull request?

This changes how the internal graph operations are implemented. The internal view API now acts as a filter that is passed down to the underlying storage for execution. This simplifies the execution path and lets us expose parallel iterators (work in progress).

Why are the changes needed?

Prepares for more parallel APIs and general simplification.

Does this PR introduce any user-facing change? If yes, is this documented?

The public API is unaffected.

How was this patch tested?

All tests still pass after the refactor.

Are there any further changes required?

A lot more cleanup and refactoring is needed to expose parallel execution to Python.


```diff
-fn base(&self) -> &(dyn BoxableGraphView<'static> + Send + Sync + 'static) {
+fn base(&self) -> &(dyn BoxableGraphView + Send + Sync + 'static) {
```
Contributor:
meaning?

```rust
fn base(&self) -> &Self::Base
```

ljeub-pometry (Collaborator, Author):
The lifetime on BoxableGraphView is no longer needed because it no longer deals with the iterators.

Contributor:
What I meant is that you can just use `Self::Base` instead of copying the type that is right above.

```rust
fn edge_filter(&self) -> Option<&EdgeFilter> {
    None
}
///
fn edges_filtered(&self) -> bool {
```
Contributor:
Empty doc comment; also, we don't seem to inline these, but we do in NodeFilterOps.

```diff
 e.time_t()
-    .or_else(|| self.edge_additions(e, layer_ids).first_t())
+    .or_else(|| self.edge_additions(e, layer_ids.clone()).first_t())
```
Contributor:
Should `edge_additions` also take `&LayerIds`?

```diff
 match e.time().map(|ti| ti.t()) {
     Some(t) => Some(min(
         self.edge_additions(e, layer_ids.clone())
             .range(t.saturating_add(1)..i64::MAX)
             .first_t()
             .unwrap_or(i64::MAX),
-        self.edge_deletions(e, layer_ids)
+        self.edge_deletions(e, layer_ids.clone())
```
Contributor:
`edge_deletions` didn't get the `&LayerIds` treatment.

fabianmurariu (Contributor) left a comment:

LGTM

@ljeub-pometry ljeub-pometry merged commit 217487c into master Mar 21, 2024
10 checks passed
@ljeub-pometry ljeub-pometry deleted the feature/GraphOpsRefactor branch March 21, 2024 17:23
fabianmurariu pushed a commit that referenced this pull request May 21, 2024
* make sure nodes are added in correct order when a node dataframe is provided

* wip parallel

* major refactor

* fix SCC test (this is a set so order does not matter)

* delete the graph_ops internal trait as it is no longer needed

* eliminate no-longer needed lifetime

* some more cleaning up

* actually pass in the locked graph to the map operations and add par_iter for nodes

* bring back some windowing optimisation

* change the node count iterator

* fix the performance regressions